This post is a follow-up to my previous one, written in August after my participation in Cloud Field Day 4. In that post, after a brief introduction to Cohesity and the problems their technology solves, I took a deep dive into the Cloud-specific features showcased at CFD4.

As a matter of fact, just a few days after I returned from CFD4, Cohesity made an impactful announcement, introducing Cohesity Helios. Back then I did not have the time to look into the announcement and write about Helios, but attending a private briefing (presented by Rawlinson Rivera) at VMworld Europe 2018 gave me the opportunity to focus on the solution and briefly report on it.


Cohesity: a short intro

Since its foundation in 2013, Cohesity has become a popular name in the Enterprise Storage vendor landscape. Although Cohesity might initially have been labeled as “just another backup vendor”, that simplistic description has certainly been unfair to them. Cohesity’s completeness of vision goes way beyond that of just another backup solution provider, putting them instead at the forefront of the “Battle for Secondary Storage”.

The problem that Cohesity is trying to solve is unfortunately a very common one: the sprawl of unmanaged, uncorrelated and often unused secondary copies of data endlessly generated by organizations. Multiple copies of the same data are created for backup, archive, test and dev, analytics and DR purposes, resulting in unmanageable, inefficient and complex data silos. Cohesity can ingest all this data, consolidate it efficiently into one single logical container and make it available for any possible use you might think of. Cohesity is a true DataPlatform meant to enable efficient use of secondary storage. While this goal was initially achieved with software-defined, hyper-converged, scalable appliances, the next inevitable step for Cohesity was to abstract the platform’s capabilities from “the iron”: first with a Virtual Edition of DataPlatform to address ROBO and IoT use cases and, lastly, a Cloud Edition capable of running on AWS, Azure and Google Cloud. All of these implementations share the same distinctive SpanFS file system and the same API-driven, policy-based management interface, extending Cohesity’s capabilities to any location your data lives in.


I came across Aviatrix for the first time a few months ago, while I was knee-deep in preparation for the AWS Associate exams and, at the same time, researching for a cloud migration project. AWS networking was a major topic of the exams and also an important research area for my assignment at work. It was clear to me from the very beginning that Cloud Networking is inherently different from traditional networking. Of course, they share the very same foundations, but designing and managing networks in any Public Cloud is a very different business from doing the same in your Data Center. In the Cloud there are no routers or switches you can log into, no console cables or SFP connectors; instead, you have VPCs that you can literally spin up with a few lines of code, with all their bells and whistles (including security policies for the workloads they contain).
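To make the “few lines of code” claim concrete, here is a minimal sketch of what that looks like in practice: a whole VPC, a subnet and a security policy described as a CloudFormation template assembled in Python. The resource names and CIDR ranges are hypothetical, purely for illustration, and are not tied to any real deployment.

```python
import json

def vpc_template(cidr="10.0.0.0/16", subnet_cidr="10.0.1.0/24"):
    """Build a minimal CloudFormation template describing a VPC with
    one subnet and a security group. Illustrative only: names and
    CIDR ranges are made up."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            # The VPC itself: one line of intent instead of racking a router
            "AppVpc": {
                "Type": "AWS::EC2::VPC",
                "Properties": {"CidrBlock": cidr, "EnableDnsSupport": True},
            },
            # A subnet carved out of the VPC address space
            "AppSubnet": {
                "Type": "AWS::EC2::Subnet",
                "Properties": {"VpcId": {"Ref": "AppVpc"},
                               "CidrBlock": subnet_cidr},
            },
            # Security policy for the workloads, shipped with the network
            "WebSg": {
                "Type": "AWS::EC2::SecurityGroup",
                "Properties": {
                    "GroupDescription": "Allow inbound HTTPS",
                    "VpcId": {"Ref": "AppVpc"},
                    "SecurityGroupIngress": [
                        {"IpProtocol": "tcp", "FromPort": 443,
                         "ToPort": 443, "CidrIp": "0.0.0.0/0"}
                    ],
                },
            },
        },
    }

# Serialize to JSON, ready to hand to CloudFormation
template = json.dumps(vpc_template(), indent=2)
```

Handing this template to CloudFormation (or expressing the same resources in Terraform) provisions the whole network stack in minutes, which is exactly the expectation gap discussed below.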

This implies a few considerations. First and foremost, the expectations of Cloud Engineers are very different from those of Network Engineers: Cloud Engineers can set up VPCs in minutes, but they can easily be frustrated by their on-prem Network counterparts taking weeks to provide VPN connectivity and BGP route distribution to the Data Center. Then there is a skills gap to fill: Cloud Engineering teams are usually small and staffed by all-round technologists rather than specialists. Very often there is no Network Guru on the Cloud Team capable of citing RFCs from memory, so there is a need to keep things simple, yet they must work “as they should”. Finally, in Public Clouds it is very easy to lose control and fall victim to VPC sprawl; managing Cloud Networking at scale is probably the biggest challenge of all.


After discussing the technical side of VMworld 2017 in my previous posts, it is finally time to shift focus to my favorite topic: networking and community.

The real added value of VMworld is the infinite opportunities to interact with peers and build your own professional and personal network. I remember my first VMworld in 2011: I was nobody and I knew nobody. In 2017 I can list the achievements I “unlocked” as a direct consequence of that trip to Copenhagen:

1. I started attending VMUG meetings
2. I became a VMUG Leader and got the VMUG President’s Award in 2016!
3. I became a vExpert
4. I landed a job with VMware (although I am not there anymore, that was a life achievement!)
5. I started blogging
6. I became a Tech Field Day delegate
7. I attended VMworld 2017 as an Official Blogger
8. I learned a lot and became a way better professional
9. I met and shook hands with the best minds in the industry
10. I made lots of friends in the community <== This one’s the best!

Going to that VMworld in 2011 on my own money and vacation days (my employer at the time wasn’t interested in having me attend conferences) was the best career and personal investment I could make; it all started from there.


Being a long-time VMworld attendee, I have learned from experience that breakout sessions, despite their incredible educational value, can cannibalize all your available time if prioritized over other VMworld activities. Also, most if not all of the sessions are recorded and made available after VMworld for easy viewing from the comfort of your couch. For this reason I set a personal rule not to attend more than two sessions a day. I broke that rule this year, and for one reason: VMware Cloud on AWS.


The two VMworld editions (USA and Europe) are traditionally held a couple of months apart; this has always guaranteed staggered announcements and two events with distinct identities and purposes. Not this year: the two events were held only a couple of weeks apart, so the audience understandably had reduced expectations for the General Sessions, which were perceived as a replay of what had been showcased in Las Vegas.


Finally back home and well rested after a hectic, super-packed week in Barcelona, it’s time to collect my ideas and put together a comprehensive set of VMworld Europe 2017 recap posts. I will begin with an introductory one, then break my stream of thoughts into several smaller posts to avoid overwhelming you with too much information all at once.

I have been a regular VMworld attendee since 2011; I only skipped the 2015 edition since… well… that was during my last week as a VMware employee. Since my first VMworld, I have seen it evolve and I have evolved too (immensely!) as an IT professional, so I consider myself a veteran able to get the most out of this experience.

VMworld is something of a routine for me: the General and Breakout Sessions, the Solutions Exchange, the parties and, most importantly, the networking. The available time is always limited, so I carefully planned a tightly packed agenda to make sure I could do everything on my list. I wore different hats this time… first I represented my organization, so I had to gather and bring back information relevant to my day job; then – once again – I was there as a VMUG Leader; and finally, as an official VMworld Blogger. It wasn’t easy to fill so many roles at the same time, but I think I managed it, at the expense of giving up proper food and enough hours of sleep!


I wrote a couple of posts (here and here) about Datrium around Tech Field Day 14 back in May; at that time I was intrigued by their fresh and unusual approach to resolving the challenges associated with both the traditional “non-converged” and the “hyper-converged” infrastructure philosophies, but at the very same time I expressed my concerns about the maturity of their solution. I was eager to see their promising technology mature, and today I am very pleased to acknowledge the efforts Datrium has been making since that day, away from the glamour of the spotlight. Datrium just made a big push: only three months after their TFD showcase, they introduced not one but two major technology updates.


OpenIO is a very young company with a history already behind it: although on the market only since 2015, the company’s founders started developing the core technology back in 2006 as part of a project for a major Telco. The code was open-sourced in 2012, then forked and finally productized and presented to customers in its current form. OpenIO is based in Lille, France, with offices in San Francisco and Tokyo and plans for expansion in the coming months.

OpenIO’s proposition could be quickly and very unfairly labeled as YAOSS – Yet Another Object Storage Solution – while in reality it is much more than that. To better understand why, let’s start with a very high-level description of the current state of the storage market, the typical use cases for object storage systems and how they are quickly evolving.


A few years after their introduction, HyperConverged systems are now a reality, slowly but steadily eroding market share from traditional server/array platforms. They promised scalability, ease of deployment, and operational and architectural simplification. While they mostly delivered on those promises, HCI systems introduced some new limitations and pain points. Probably the most relevant one stems from the architecture of HCI systems, in which multiple identical nodes – each providing compute, storage and data services – are pooled together. This creates an inter-dependency between nodes, as VM data must be available on multiple nodes at the same time to guarantee resiliency in case of failure. Consequently, HCI nodes are not stateless: inter-node, east-west communication is required to guarantee that data resiliency policies are applied. Unfortunately, this statefulness has other consequences too: when a node goes down, either because of a fault or a planned maintenance task, so does the storage that comes with it, and data must be rebuilt or relocated to ensure continued operations.
