This post is a follow-up to last week’s one on Dell Technologies PowerStore, which was featured – among other products from the vendor’s portfolio – as part of Tech Field Day’s special two-day event. Here we will focus on VxRail, and I will briefly go through a few things that caught my interest, once again with a specific emphasis on its integration with the VMware stack. As usual, if you want to learn more, my recommendation is to head to the TFD event page, where you can find plenty of videos, including demos, in which the solution and its capabilities are presented in detail.
Brief but necessary intro: Tech Field Day recently held a two-day special event focused on Dell Technologies storage and compute/HCI solutions, specifically PowerStore, PowerMax and VxRail (with bonus content on APEX and CloudIQ). Most of the topics were relevant to VMware admins and architects, as the interoperability of Dell’s solutions with the vSphere (and above) stack was a common theme throughout the event.
If you want to watch the many presentations and the very cool demos, all the videos are available as usual on the event page for your async consumption. In this post (or perhaps series?) I’d like to gather a few ideas and impressions I collected as a delegate. This will by no means be an extensive and detailed analysis, as I will stick to what impressed me, bookmarking concepts for my own reference. If you want to dig deeper or check other specific topics I am not touching here, go ahead and check the online videos; the nice people at Tech Field Day will appreciate it!
In my previous post on Tanzu, I explained how easy it is to start consuming Kubernetes workloads from within vSphere 7, thanks to the newly introduced “vSphere with Tanzu”.
As discussed, this comes with some limitations, but at the same time it enables customers to deploy and consume modern apps on a tried and tested platform without the need to invest in more advanced technologies like vSAN or NSX-T.
For those who are ready to take a bigger leap and want a richer, more complete experience, the way to go is “vSphere with Kubernetes”.
At VMworld 2019 VMware announced “Project Pacific”, officially entering the enterprise Kubernetes market and putting an end to the speculation that had been running wild about vSphere becoming a platform for native Kubernetes workloads.
The Tanzu branding was introduced at the same time, revealing a whole portfolio of solutions covering the complex life-cycle of modern applications, from development and build to operations and management. A number of products, all branded as Tanzu, were presented, coming from recent acquisitions, the re-branding of existing solutions or the development of new ones. This caused some initial confusion among customers about what Tanzu really was: put simply, Tanzu is an “umbrella” beneath which VMware positioned the many solutions aimed at building and running modern applications, not just on-prem but on any public cloud, with the same experience regardless of location.
One of the highlights of TFD19 was the visit to VMware’s Palo Alto HQ to hear the latest from the Cloud Management Business Unit. The day was split in two, with the first half focused on the latest advancements in vRealize Operations Manager (a.k.a. vROps) and the second half entirely dedicated to Cloud Automation Services (CAS).
Both sessions were demo-heavy and focused more on showing the real capabilities of the products than on killing the audience with endless PowerPoint decks. John Dias and Cody De Arkland did a terrific job presenting their respective solutions; I recommend you visit the Tech Field Day website and watch the videos: seeing is believing.
Both topics were equally interesting. From my point of view, as a long-time vROps user, John’s presentation was useful for taking notes on the “what’s new” features to be tested soon back at work. After an exhausting TFD week, I saved what was left of my energy to focus on CAS. Below are some of my thoughts on it.
Introduction to RPA
The acronym RPA, which stands for “Robotic Process Automation”, identifies a relatively young type of technology that is becoming more and more popular across the IT industry. While RPA solutions have been available on the market for almost two decades now, their level of maturity has reached a point where they are now widely adopted in almost any business area.
But what is the purpose of RPA? To condense it into just a few lines: RPA is a set of technologies and tools that aims at multiplying the effectiveness of human workers by partnering them with a digital counterpart capable of automating or augmenting the execution of any type of workflow or business process.
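To make that definition a bit less abstract, here is a deliberately tiny Python sketch of the kind of repetitive chore a “digital worker” typically absorbs: drafting payment reminders from an exported invoice list. Real RPA suites drive GUIs, APIs and documents through dedicated tooling; this toy example, with entirely made-up data, only illustrates the pattern of a script taking over a manual, copy-paste step.

```python
import csv
from io import StringIO

# Made-up invoice export, standing in for whatever system a human would read.
INVOICES_CSV = """invoice_id,customer,amount,days_overdue
1001,Acme Corp,1200.50,15
1002,Globex,300.00,0
1003,Initech,875.25,42
"""

def draft_reminders(csv_text: str) -> list[str]:
    """Return a reminder e-mail draft for every overdue invoice."""
    reminders = []
    for row in csv.DictReader(StringIO(csv_text)):
        if int(row["days_overdue"]) > 0:
            reminders.append(
                f"To: {row['customer']}\n"
                f"Subject: Invoice {row['invoice_id']} overdue\n"
                f"Please settle the outstanding amount of {row['amount']} EUR.\n"
            )
    return reminders

if __name__ == "__main__":
    for mail in draft_reminders(INVOICES_CSV):
        print(mail)
```

The value is not in the few lines of code themselves but in the pattern: a human reviews exceptions, while the bot handles the bulk of the repetitive work.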
Cohesity: a short intro
Cohesity, since its foundation in 2013, has become a popular name in the Enterprise Storage vendor landscape; although initially Cohesity might have been labeled as “just another backup vendor”, this misplaced and simplistic description has certainly been very unfair to them. Cohesity’s completeness of vision goes way beyond that of being just another backup solution provider, putting them instead at the forefront of the “Battle for Secondary Storage”.
The problem that Cohesity is trying to solve is unfortunately a very common one: the sprawl of unmanaged, uncorrelated and often unused secondary copies of data endlessly generated by organizations. Multiple copies of the same data are created for backups, archives, test and dev, analytics and DR purposes, resulting in unmanageable, inefficient and complex data silos. Cohesity can ingest all this data, consolidate it efficiently into one single logical container and make it available for any possible use you might think of. Cohesity is a true DataPlatform meant to enable efficient use of secondary storage. While this goal was initially achieved with software-defined, hyperconverged, scalable appliances, the next inevitable step for Cohesity was to abstract the platform’s capabilities from “the iron” and to develop a Virtual Edition of DataPlatform to address ROBO and IoT use cases and, lastly, a Cloud Edition capable of running on AWS, Azure and Google Cloud. All of these implementations share the same distinctive SpanFS file system and the same API-driven, policy-based management interface, extending Cohesity’s capabilities to any location your data lives in.
I came across Aviatrix for the first time a few months ago, while I was knee-deep in preparing for the AWS Associate exams and at the same time researching for a cloud migration project. AWS networking was a major topic of the exams and also an important research area for my assignment at work. It was very clear to me from the very beginning that Cloud Networking is inherently different from traditional networking. Of course, they share the very same foundations, but designing and managing networks in any Public Cloud is a very different business from doing the same in your Data Center. In the Cloud there are no routers or switches you can log into, there are no console cables nor SFP connectors, but you have VPCs that you can literally spin up with a few lines of code, with all their bells and whistles (including security policies for the workloads they contain).
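As an illustration of the “few lines of code” point, below is a minimal boto3 sketch that creates a VPC with one subnet, an internet gateway and a locked-down security group. The region, CIDR ranges and names are placeholders of my own choosing, and a production design would obviously add route tables, NAT, tagging and so on.

```python
import boto3

# Assumes AWS credentials are already configured; region is just an example.
ec2 = boto3.client("ec2", region_name="eu-west-1")

# A brand new VPC with an illustrative CIDR block.
vpc = ec2.create_vpc(CidrBlock="10.10.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# One subnet and an internet gateway attached to the VPC.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.10.1.0/24")
igw = ec2.create_internet_gateway()
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGateway"]["InternetGatewayId"], VpcId=vpc_id
)

# A security group that only allows inbound HTTPS.
sg = ec2.create_security_group(
    GroupName="web", Description="allow https only", VpcId=vpc_id
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

print(f"VPC {vpc_id} ready with subnet {subnet['Subnet']['SubnetId']}")
```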
All of this implies a few considerations. First and foremost, the expectations of Cloud Engineers are very different from those of Network Engineers: Cloud Engineers can set up VPCs in minutes, but they can easily be frustrated by their on-prem Network counterparts lagging weeks behind to provide VPN connectivity and BGP route distribution to the Data Center. Then there is the skills gap to be filled: Cloud Engineering Teams are usually small and staffed by all-round technologists rather than specialists; very often there is no Network Guru in Cloud Teams capable of citing RFCs from memory, so there is a need to keep things simple, yet they must work “as they should”. Finally, in Public Clouds it is very easy to lose control and become victims of VPC sprawl; managing Cloud Networking at scale is probably the biggest challenge of all.
A few years after their introduction, HyperConverged systems are now a reality, slowly but steadily eroding market share from traditional server/array platforms. They promised scalability, ease of deployment, and operational and architectural simplification. While they mostly delivered on those promises, HCI systems introduced some new limitations and pain points. Probably the most relevant is a consequence of the uniqueness of HCI systems’ architecture, where multiple identical nodes – each one providing compute, storage and data services – are pooled together. This induces an inter-dependency between them, as VM data must be available on different nodes at the same time to guarantee resiliency in case of failure. Consequently, HCI nodes are not stateless: inter-node, east-west communications are required to guarantee that data resiliency policies are applied. Unfortunately, this statefulness also has other consequences: when a node is brought down, either by a fault or because of a planned maintenance task, so is the storage that comes with it, and data must be rebuilt or relocated to ensure continued operations.
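To visualise that inter-dependency, here is a small Python toy model, purely illustrative and not tied to any specific vendor’s placement logic: blocks are written to two nodes for resiliency, and failing a node forces the lost copies to be rebuilt on the survivors.

```python
import random

# Toy model of HCI data placement (replication factor 2): every block lives on
# two different nodes, so losing a node means rebuilding its copies elsewhere.
NODES = ["node1", "node2", "node3", "node4"]
REPLICATION_FACTOR = 2

placement = {node: set() for node in NODES}

def write_block(block):
    # Place the block on two distinct, currently live nodes chosen at random.
    for node in random.sample(list(placement), REPLICATION_FACTOR):
        placement[node].add(block)

def fail_node(failed):
    # The node's local storage disappears with it...
    lost_blocks = placement.pop(failed)
    for block in lost_blocks:
        holders = {n for n, blocks in placement.items() if block in blocks}
        candidates = [n for n in placement if n not in holders]
        if candidates:
            # ...so the missing replica is rebuilt on a surviving node.
            placement[random.choice(candidates)].add(block)

if __name__ == "__main__":
    for i in range(8):
        write_block(f"vm-block-{i}")
    fail_node("node2")
    print({node: sorted(blocks) for node, blocks in placement.items()})
```

The rebuild traffic in the toy model is exactly the east-west communication mentioned above: it is what keeps the resiliency policy honoured, and it is also what makes node maintenance a data operation rather than a purely compute one.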
ClearSky is a Boston-based startup founded in 2014 by industry veterans Lazarus Vekiarides and Ellen Rubin; ClearSky comes with a unique proposition which – if successful – might revolutionize the way primary storage is consumed. I introduced ClearSky in my previous TFD14 preview article, where I described their solution; the objective is to drastically reduce the Data Center footprint of traditional primary storage by shifting it to the Cloud, while at the same time simplifying DR operations and ensuring accessibility of data from any location. This outcome seemed impossible to achieve due to the strict latency requirements that primary storage inherently carries, but ClearSky has found an elegant and effective solution to this conundrum. There is, however, one caveat, and it will become evident in the following paragraph.