This post is a follow-up to last week’s post on Dell Technologies PowerStore, which was featured – among other products from the vendor’s portfolio – at Tech Field Day’s special two-day event. Here we will focus on VxRail, and I will briefly go through a few things that caught my interest, once again with a specific emphasis on its integration with the VMware stack. As usual, if you want to learn more, my recommendation is to head to the TFD event page, where you will find plenty of videos, including demos, in which the solution and its capabilities are presented in detail.
Brief but necessary intro: Tech Field Day recently held a two-day special event focused on Dell Technologies storage and compute/HCI solutions, specifically PowerStore, PowerMax and VxRail (with bonus content on APEX and CloudIQ). Most of the topics were relevant to VMware admins and architects, as the interoperability of Dell’s solutions with the vSphere (and above) stack was a common theme throughout the event.
If you want to watch the many presentations and the very cool demos, all the videos are available as usual on the event page for your async consumption. In this post (or perhaps series?) I’d like to gather a few ideas and impressions I collected as a delegate. This will by no means be an extensive and detailed analysis, as I will stick to what impressed me, bookmarking concepts for my own reference. If you want to dig deeper or check other specific topics I am not touching here, go ahead and check the online videos; the nice people at Tech Field Day will appreciate it!
One of the highlights of TFD19 was the visit to VMware’s Palo Alto HQ to hear the latest from the Cloud Management Business Unit. The day was split in two, with the first half focused on the latest advancements in vRealize Operations Manager (a.k.a. vROps) and the second half entirely dedicated to Cloud Automation Services (CAS).
Both sessions were demo-heavy, focused more on showing the real capabilities of the products than on killing the audience with endless PowerPoint decks. John Dias and Cody De Arkland did a terrific job presenting their respective solutions; I recommend visiting the Tech Field Day website and watching the videos: seeing is believing.
Both topics were equally interesting. From my point of view, as a long-time vROps user, John’s presentation was useful for taking notes on the “what’s new” features to be tested soon back at work. After an exhausting TFD week, I saved what was left of my energy to focus on CAS. Below are some of my thoughts on it.
Introduction to RPA
The acronym RPA, which stands for “Robotic Process Automation”, identifies a type of technology that is becoming more and more popular across the IT industry. While RPA solutions have been available on the market for almost two decades, their maturity has now reached a point where they are widely adopted in almost every business area.
But what is the purpose of RPA? To condense it into just a few lines: RPA is a set of technologies and tools that aims to multiply the effectiveness of human workers by partnering them with a digital counterpart capable of automating or augmenting the execution of any type of workflow or business process.
A few years after their introduction, hyperconverged systems are now a reality, slowly but steadily eroding market share from traditional server/array platforms. They promised scalability, ease of deployment, and operational and architectural simplification. While they mostly delivered on those promises, HCI systems introduced some new limitations and pain points. Probably the most relevant is a consequence of the uniqueness of the HCI architecture, where multiple identical nodes – each one providing compute, storage and data services – are pooled together. This creates an inter-dependency between the nodes, as VM data must be available on different nodes at the same time to guarantee resiliency in case of failure. Consequently, HCI nodes are not stateless: inter-node, east-west communication is required to guarantee that data resiliency policies are applied. Unfortunately, this statefulness has other consequences too: when a node is brought down, either by a fault or for planned maintenance, so is the storage that comes with it, and data must be rebuilt or relocated to ensure continued operations.
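To make the statefulness point concrete, here is a deliberately simplified model (my own illustration, not any vendor’s actual placement logic): each VM object keeps a number of replicas on distinct nodes, and when a node goes down, every copy it hosted must be rebuilt on a surviving node to restore the resiliency policy.

```python
# Toy model of HCI replica placement: node and VM names are made up,
# and the round-robin placement is purely illustrative.

def place(objects, nodes, replicas=2):
    """Naively spread each object's replicas across distinct nodes."""
    placement = {}
    for i, obj in enumerate(objects):
        placement[obj] = [nodes[(i + r) % len(nodes)] for r in range(replicas)]
    return placement

def rebuild_on_failure(placement, failed, surviving):
    """Return the copies that must be recreated when `failed` goes down."""
    moves = []
    for obj, homes in placement.items():
        if failed in homes:
            # Pick any surviving node that does not already hold a copy.
            target = next(n for n in surviving if n not in homes)
            homes[homes.index(failed)] = target
            moves.append((obj, target))
    return moves

nodes = ["node1", "node2", "node3", "node4"]
pl = place(["vm-a", "vm-b", "vm-c"], nodes)
moves = rebuild_on_failure(pl, "node2", [n for n in nodes if n != "node2"])
# Every object that had a copy on node2 now triggers rebuild traffic.
```

Even in this toy version, taking one node offline generates east-west rebuild traffic for every object it hosted – exactly the pain point described above.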
ClearSky is a Boston-based startup founded in 2014 by industry veterans Lazarus Vekiarides and Ellen Rubin. ClearSky comes with a unique proposition which – if successful – might revolutionize the way primary storage is consumed. I introduced ClearSky in my previous TFD14 preview article, where I described their solution; the objective is to drastically reduce the data center footprint of traditional primary storage by shifting it to the cloud, while at the same time simplifying DR operations and ensuring accessibility of data from any location. This outcome seemed impossible to achieve given the strict latency requirements that primary storage inherently carries, but ClearSky has found an elegant and effective solution to this conundrum. There is one caveat, however, and it will become evident in the following paragraph.
This article is a follow-up to my TFD14 Turbonomic preview; at that time I knew very little about Turbonomic, and that post was a collection of thoughts and impressions I had gathered looking at the product from a distance. I am happy to say that after the TFD presentation my understanding of the solution is clearer and the initial good impressions are confirmed.
Turbonomic is – in their own words – an “Autonomic Platform”; the name is a play on words merging Automation and Economics, because Turbonomic uses a “Supply Chain” metaphor in which every element in the infrastructure “buys” resources from the underlying components and “sells” them upstream, leveraging automation to ensure that the apps always perform in their “Desired State”.
The objective is to “assure application performance” regardless of where the app is running (in the private, public or hybrid cloud). Coming from an operations background, I know well how difficult it is to keep an infrastructure running within ideal parameters: any single intervention – no matter how apparently insignificant – introduces an imbalance in the infrastructure, and this in turn leads to a deviation from those optimal parameters. Application performance becomes less predictable, and corrective actions must be taken to return to the “Desired State”. This is the so-called “Break-Fix” loop, which requires continuous human intervention.
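The supply-chain metaphor can be sketched in a few lines of code. To be clear, this is my own hypothetical illustration of a market-style scheduler – the class names and the pricing formula are assumptions for the sake of the example, not Turbonomic’s actual engine: each layer prices its resource based on utilization, and the buyer simply moves demand to the cheapest seller, which naturally nudges the system back toward a balanced state.

```python
# Hypothetical market-style scheduler: sellers price a resource by
# utilization; buyers place demand on the cheapest seller.

class Seller:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.used = name, capacity, 0.0

    def price(self):
        # Price climbs steeply as utilization approaches 100%
        # (an illustrative formula, not the real one).
        util = self.used / self.capacity
        return 1.0 / max(1e-9, (1.0 - util) ** 2)

def buy(demand, sellers):
    """Place demand on the cheapest (least utilized) seller."""
    cheapest = min(sellers, key=lambda s: s.price())
    cheapest.used += demand
    return cheapest.name

hosts = [Seller("host1", 100.0), Seller("host2", 100.0)]
hosts[0].used = 80.0           # host1 is already heavily loaded
chosen = buy(10.0, hosts)      # demand flows to the cheaper host2
```

The appeal of this model is that no central “break-fix” operator is needed: prices encode the state of the infrastructure, and every buy decision is itself a corrective action.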
NetApp opened Tech Field Day 14 with a presentation by Andrew Sullivan and Kapil Arora, entirely focused on the company’s efforts in the Open Source field. One might imagine that a company like NetApp – or any other big IT vendor – would consider Open Source a menace to their business or, in the best possible scenario, just a fad worth exploiting until the advent of the next hype. Well, that may have made some sense until just a few years ago, but today we live in the GitHub age and it is evident that no company can afford not to share some of its own open code with the public.
NetApp is no different from any other company, and they are probably doing this for many of the same reasons as their competitors – what is worth investigating here is where their motivation comes from and what is driving their efforts.
NetApp’s involvement with Open Source began in 2011 with their support for OpenStack and their contributions to the Cinder and Manila projects; the team has evolved – mostly over the past 18 months – into something bigger called the “Open Ecosystem”, also affectionately referred to as “The Pub”. The focus has expanded well beyond OpenStack and now covers containers, automation and orchestration, configuration management, CI/CD and Agile development tools.
In this third and last post of my Tech Field Day 14 Preview Series, I will focus on Datrium. Truth be told, a fourth vendor was added at the last minute to the list of TFD14 presenters, and that is NetApp; interestingly enough, their presentation will be DevOps-oriented, and I will report my impressions in a future post when I am back from Boston.
Back to Datrium then. Like me, Datrium will be making its first appearance at Tech Field Day, so there was no “TFD prior art” in the form of old presentation recordings I could leverage to get acquainted with their solutions; my research was therefore limited to browsing their company website. I hope I got everything right, but I can tell you that what I found there was enough to tickle my curiosity: they seem to have an interesting approach to the converged data center problem, and their own buzzword for it is “Open Convergence”. What I see is a mix of ideas already heard before, but even if the ingredients are familiar, the recipe is different and the serving looks yummy! Enough with the gastronomic analogy, let’s talk tech.
This second Tech Field Day 14 preview post is focused on Turbonomic. I must confess I know very little about their solution and I am very eager to hear more from these guys when I meet them in Boston: I did some research on Turbonomic in the past few days and I definitely feel I need to learn more about their product. Very much looking forward to being enlightened!
Coming from a vROps background, I kind of assumed – most likely wrongly – that Turbonomic was a direct competitor of VMware’s solution, but from what I have seen so far, although there are certainly some similarities and overlapping areas, we are talking about two completely different beasts, so I will leave the comparisons there.