After discussing the technical side of VMworld 2017 in my previous posts, it is finally time to shift focus to my favorite topic: networking and community.

The real added value of VMworld lies in the endless opportunities to interact with peers and build your own professional and personal network. I remember my first VMworld in 2011: I was nobody and I knew nobody. In 2017 I can list the achievements I “unlocked” as a direct consequence of that trip to Copenhagen:

1. I started attending VMUG meetings
2. I became a VMUG Leader and got the VMUG President’s Award in 2016!
3. I became a vExpert
4. I landed a job with VMware (although I am not there anymore, that was a life achievement!)
5. I started blogging
6. I became a Tech Field Day delegate
7. I attended VMworld 2017 as an Official Blogger
8. I learned a lot and became a way better professional
9. I met and shook hands with the best minds in the industry
10. I made lots of friends in the community <== This one’s the best!

Going to that VMworld in 2011 on my own money and leave days (my employer at the time wasn’t interested in having me attend conferences) was the best career and personal investment I could have made; it all started from there.


As a long-time VMworld attendee, I have learned from experience that breakout sessions, despite their incredible educational value, can cannibalize all your available time if prioritized over other VMworld activities. Moreover, most if not all of the sessions are recorded and made available after VMworld for easy viewing from the comfort of your couch. For this reason I set a personal rule of never attending more than two sessions a day. This year I broke that rule for one reason: VMware Cloud on AWS.


The two VMworld 2017 editions (USA and Europe) are traditionally held a couple of months apart; this has always guaranteed staggered announcements and two events with distinct identities and purposes. Not this year: the two events were held only a couple of weeks apart, so the audience understandably had reduced expectations for the General Sessions, which were perceived as a replay of what had been showcased in Las Vegas.


Finally back home and well rested after a hectic, super-packed week in Barcelona, it’s time to collect my thoughts and put together a comprehensive set of VMworld Europe 2017 recap posts. I will begin with an introductory one, then break my stream of thoughts into smaller posts to avoid overwhelming you with too much information all at once.

I have been a regular VMworld attendee since 2011; I only skipped the 2015 edition because… well… it fell during my last week as a VMware employee. Since my first VMworld I have watched the event evolve, and I have evolved immensely as an IT professional too, so I consider myself a veteran able to get the most out of the experience.

VMworld is something of a routine for me: the General and Breakout Sessions, the Solutions Exchange, the parties and, most importantly, the networking. The available time is always limited, so I carefully planned a tightly packed agenda to make sure I could get through everything on my list. I wore different hats this time: first, I represented my organization, so I had to gather and bring back information relevant to my day job; then – once again – I was there as a VMUG Leader; and finally, I attended as an official VMworld Blogger. It wasn’t easy to fill so many roles at the same time, but I think I managed it, at the expense of proper food and enough hours of sleep!


I have written a couple of posts (here and here) about Datrium around Tech Field Day 14 back in May. At the time I was intrigued by their fresh and unusual approach to resolving the challenges associated with both the traditional “non-converged” and the “hyper-converged” infrastructure philosophies, but at the same time I expressed my concerns about the maturity of their solution. I was eager to see their promising technology mature, and today I am very pleased to acknowledge the efforts Datrium has been making since that day, away from the glamour of the spotlight. Datrium has just made a big push: only three months after their TFD showcase, they introduced not one but two major technology updates.


OpenIO is a very young company with a history already behind it: although on the market only since 2015, the company’s founders started developing the core technology back in 2006 as part of a project for a major telco. The code was open-sourced in 2012, then forked, and finally productized and presented to customers in its current form. OpenIO is based in Lille, France, with offices in San Francisco and Tokyo and plans for further expansion in the coming months.

OpenIO’s proposition could quickly – and very unfairly – be labeled as YAOSS (Yet Another Object Storage Solution), when in reality it is much more than that. To understand why, let’s start with a very high-level description of the current state of the storage market, the typical use cases for object storage systems, and how quickly they are evolving.


A few years after their introduction, hyperconverged (HCI) systems are now a reality, slowly but steadily eroding market share from traditional server/array platforms. They promised scalability, ease of deployment, and operational and architectural simplification. While they have mostly delivered on those promises, HCI systems introduced some new limitations and pain points. Probably the most significant is a consequence of the HCI architecture itself, in which multiple identical nodes – each providing compute, storage and data services – are pooled together. This creates an interdependency between nodes, as VM data must be available on different nodes at the same time to guarantee resiliency in case of failure. Consequently, HCI nodes are not stateless: inter-node, east-west communication is required to guarantee that data resiliency policies are applied. Unfortunately, this statefulness has other consequences too: when a node is brought down, whether by a fault or for planned maintenance, so is the storage that comes with it, and data must be rebuilt or relocated to ensure continued operations.
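
To make the statefulness point concrete, here is a minimal toy sketch in Python – my own illustration, not any vendor’s actual placement algorithm – showing why losing a node forces a rebuild:

```python
import itertools

# Toy model of HCI replica placement: every block of VM data is kept on
# two distinct nodes so that a single node failure never loses data.
REPLICATION_FACTOR = 2

def place_replicas(blocks, nodes):
    """Spread each block's replicas across distinct nodes (round-robin)."""
    placement = {}
    node_cycle = itertools.cycle(nodes)
    for block in blocks:
        owners = set()
        while len(owners) < REPLICATION_FACTOR:
            owners.add(next(node_cycle))
        placement[block] = owners
    return placement

def fail_node(placement, dead_node, surviving_nodes):
    """When a node goes down, every block it held must be re-replicated
    onto a surviving node -- the east-west 'rebuild' traffic."""
    rebuilt = 0
    for block, owners in placement.items():
        if dead_node in owners:
            owners.discard(dead_node)
            # Pick any surviving node not already holding this block.
            replacement = next(n for n in surviving_nodes if n not in owners)
            owners.add(replacement)
            rebuilt += 1
    return rebuilt

nodes = ["node1", "node2", "node3", "node4"]
placement = place_replicas([f"block{i}" for i in range(8)], nodes)
moved = fail_node(placement, "node2", [n for n in nodes if n != "node2"])
print(f"{moved} blocks had to be rebuilt after losing node2")
```

Even in this toy model, taking one node out of four offline means a quarter of the data set has to move across the network before the pool is resilient again – exactly the kind of east-west churn the paragraph above describes.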


ClearSky is a Boston-based startup founded in 2014 by industry veterans Lazarus Vekiarides and Ellen Rubin; ClearSky comes with a unique proposition which – if successful – might revolutionize the way primary storage is consumed. I introduced ClearSky in my previous TFD14 preview article, where I described their solution; the objective is to drastically reduce the data center footprint of traditional primary storage by shifting it to the cloud, while at the same time simplifying DR operations and ensuring accessibility of data from any location. This outcome seemed impossible to achieve because of the strict latency requirements that primary storage inherently carries, but ClearSky has found an elegant and effective solution to this conundrum. There is one caveat, however, and it will become evident in the following paragraph.
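
To put the latency conundrum in perspective, here is a quick back-of-envelope sketch (my own numbers and code, not ClearSky’s): even at the speed of light in fiber, distance alone quickly eats the sub-millisecond response times primary storage is expected to deliver.

```python
# Back-of-envelope: why distance matters for primary storage latency.
# Light in optical fiber travels at roughly 200,000 km/s (~2/3 of c),
# i.e. about 5 microseconds per kilometer, one way.
US_PER_KM = 5

def round_trip_us(distance_km):
    """Speed-of-light round trip over fiber, ignoring switching delays."""
    return 2 * distance_km * US_PER_KM

for km in (10, 100, 1000):
    print(f"{km:>5} km -> {round_trip_us(km) / 1000:.1f} ms round trip (best case)")

# 10 km -> 0.1 ms, 100 km -> 1.0 ms, 1000 km -> 10.0 ms: a far-away
# cloud region alone can burn more latency than an all-flash array's
# entire response time, which is why proximity to the data matters.
```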


This article is a follow-up to my TFD14 Turbonomic preview; at that time I knew very little about Turbonomic, and that post was a collection of thoughts and impressions gathered while looking at the product from a distance. I am happy to say that after the TFD presentation my understanding of the solution is much clearer, and my initial good impressions have been confirmed.

Turbonomic is – in their own words – an “Autonomic Platform”; the play on words merges Automation and Economics. Turbonomic uses a “supply chain” metaphor in which every element of the infrastructure “buys” resources from the components below it and “sells” them upstream, while leveraging automation to ensure that applications are always performing in their “Desired State”.

The objective is to “assure application performance” regardless of where the app is running (in the private, public or hybrid cloud). Coming from an operations background, I know well how difficult it is to keep an infrastructure running within ideal parameters: any single intervention – no matter how apparently insignificant – introduces an imbalance in the infrastructure, and this in turn leads to a deviation from those optimal parameters. Application performance becomes less predictable, and corrective actions must be taken to return to the “Desired State”. This is the so-called “Break-Fix” loop, which requires continuous human intervention.
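
To illustrate the metaphor, here is a minimal toy sketch in Python – my own illustration, not Turbonomic’s actual engine – of how utilization-based pricing naturally pushes workloads away from congested components without a human chasing the break-fix loop:

```python
# Toy sketch of the "supply chain" idea: each provider prices its
# resource by utilization, and a consumer "shops" for the cheapest one.

def price(used, capacity):
    """Price rises steeply as a resource approaches full utilization."""
    utilization = used / capacity
    return 1 / (1 - utilization) if utilization < 1 else float("inf")

# Hypothetical hosts: (CPU units used, CPU capacity).
hosts = {"hostA": (70, 100), "hostB": (30, 100), "hostC": (90, 100)}

# A VM needing 10 more units gets a quote from every host; the busiest
# host becomes prohibitively expensive and is avoided automatically.
quotes = {h: price(used + 10, cap) for h, (used, cap) in hosts.items()}
best = min(quotes, key=quotes.get)
print(f"Cheapest placement: {best} (price {quotes[best]:.2f})")
```

Repeating this buy/sell decision continuously, for every resource at every layer, is what keeps the pool balanced and the apps near their desired state – the economic alternative to the manual break-fix loop.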


NetApp opened Tech Field Day 14 with a presentation by Andrew Sullivan and Kapil Arora, entirely focused on the company’s efforts in the Open Source field. One might imagine that a company like NetApp – or any other big IT vendor – would consider Open Source a menace to its business or, at best, a fad worth exploiting until the next hype comes along. That view might have made some sense a few years ago, but today we live in the GitHub age, and it is evident that no company can afford not to share some of its own code with the public.

NetApp is no different from any other company, and its reasons are probably similar to those of its competitors – what is worth investigating here is where its motivation comes from and what is driving its efforts.

NetApp’s involvement with Open Source began in 2011 with its support for OpenStack and its contributions to the Cinder and Manila projects; the team has since evolved – mostly in the past 18 months – into something bigger called the “Open Ecosystem”, also affectionately referred to as “The Pub”. The focus has expanded well beyond OpenStack and now covers containers, automation and orchestration, configuration management, CI/CD and Agile development tools.
