After discussing the technical side of VMworld 2017 in my previous posts, it is finally time to shift focus to my favorite topic: networking and community.

The real added value of VMworld is the endless opportunity to interact with peers and build your own professional and personal network. I remember my first VMworld in 2011: I was nobody and I knew nobody. In 2017 I can list the achievements I “unlocked” as a direct consequence of that trip to Copenhagen:

1. I started attending VMUG meetings
2. I became a VMUG Leader and got the VMUG President’s Award in 2016!
3. I became a vExpert
4. I landed a job with VMware (although I am not there anymore, that was a life achievement!)
5. I started blogging
6. I became a Tech Field Day delegate
7. I attended VMworld 2017 as an Official Blogger
8. I learned a lot and became a way better professional
9. I met and shook hands with the best minds in the industry
10. I made lots of friends in the community <== This one’s the best!

Going to that VMworld in 2011 on my own money and vacation days (my employer at the time wasn’t interested in having me attend conferences) was the best career and personal investment I could have made; it all started from there.


Being a long-time VMworld attendee, I have learned from experience that breakout sessions, despite their incredible educational value, can cannibalize all your available time if prioritized over other VMworld activities. Also, most if not all of the sessions are recorded and made available after VMworld for easy viewing from the comfort of your couch. For this reason I set a personal rule not to attend more than two sessions a day. I broke this rule this year, and for one reason: VMware Cloud on AWS.


The two VMworld 2017 editions (USA and Europe) are traditionally held a couple of months apart; this has always guaranteed staggered announcements and two events with distinct identities and purposes. Not this year: the two events were held only a couple of weeks apart, so the audience understandably had reduced expectations for the General Sessions, which were perceived as a replay of what had been showcased in Las Vegas.


Finally back home and well rested after a hectic, super-packed week in Barcelona, it’s time to collect my ideas and put together a comprehensive set of VMworld Europe 2017 recap posts. I will begin with an introductory one, then break my stream of thoughts into several smaller posts to avoid overwhelming you with too much information all at once.

I have been a regular VMworld attendee since 2011; I only skipped the 2015 edition because… well… it fell during my last week as a VMware employee. Since my first VMworld I have seen the event evolve, and I have evolved too (immensely!) as an IT professional, so I consider myself a veteran able to get the most out of the experience.

VMworld is something of a routine for me: the General and Breakout Sessions, the Solutions Exchange, the parties and, most importantly, the networking. The available time is always limited, so I carefully planned a tightly packed agenda to be sure I could do everything on my list. I wore different hats this time: first I represented my organization, so I had to gather and bring back information relevant to my day job; then, once again, I was there as a VMUG Leader; and finally, I attended as an official VMworld Blogger. It wasn’t easy to fill so many roles at the same time, but I think I managed it, at the expense of proper food and enough hours of sleep!


I have written a couple of posts (here and here) about Datrium around Tech Field Day 14 back in May; at that time I was intrigued by their fresh and unusual approach to resolving the challenges associated with both the traditional “non-converged” and the “hyper-converged” infrastructure philosophies, but at the same time I expressed my concerns about the maturity of their solution. I was eager to see their promising technology mature, and today I am very pleased to acknowledge the efforts Datrium has been making since then, away from the glamour of the spotlight. Datrium just made a big push: only three months after their TFD showcase, they introduced not one but two major technology updates.


A few years after their introduction, HyperConverged systems are now a reality, slowly but steadily eroding market share from traditional server/array platforms. They promised scalability, ease of deployment, and operational and architectural simplification. While they mostly delivered on those promises, HCI systems introduced some new limitations and pain points. Probably the most relevant is a consequence of the uniqueness of the HCI architecture, where multiple identical nodes – each one providing compute, storage and data services – are pooled together. This creates an inter-dependency between them, as VM data must be available on different nodes at the same time to guarantee resiliency in case of failure. Consequently, HCI nodes are not stateless: inter-node, east-west communication is required to guarantee that data resiliency policies are applied. Unfortunately, this statefulness also has other consequences: when a node goes down, either because of a fault or a planned maintenance task, so does the storage that comes with it, and its data must be rebuilt or relocated to ensure continued operations.
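The rebuild behavior described above can be illustrated with a toy model. This is a minimal sketch (hypothetical names and placement logic, not any vendor's actual implementation): each data block is kept on two nodes, and when a node fails, any block left with a single copy must be rebuilt on a surviving node to restore the resiliency policy.

```python
# Toy model of HCI replica placement and rebuild (illustrative only).
class Cluster:
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.replicas = {}              # block -> set of nodes holding a copy

    def write(self, block, rf=2):
        # Place rf copies on distinct nodes (real systems balance placement).
        self.replicas[block] = set(sorted(self.nodes)[:rf])

    def fail_node(self, node):
        # Node loss takes its local storage with it: drop its copies,
        # then rebuild any block that violates the resiliency policy.
        self.nodes.discard(node)
        rebuilt = 0
        for block, holders in self.replicas.items():
            holders.discard(node)
            if len(holders) < 2:        # policy violated -> rebuild elsewhere
                holders.add(next(iter(self.nodes - holders)))
                rebuilt += 1
        return rebuilt

c = Cluster(["n1", "n2", "n3"])
for b in range(4):
    c.write(f"block{b}")
print(c.fail_node("n1"))                # every block lost a copy: 4 rebuilds
```

The point of the sketch is the cost hidden in `fail_node`: even a planned maintenance reboot triggers data movement, which is exactly the east-west traffic and rebuild overhead discussed above.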


ClearSky is a Boston-based startup founded in 2014 by industry veterans Lazarus Vekiarides and Ellen Rubin. ClearSky comes with a unique proposition which, if successful, might revolutionize the way primary storage is consumed. I introduced ClearSky in my previous TFD14 preview article, where I described their solution: the objective is to drastically reduce the data center footprint of traditional primary storage by shifting it to the cloud, while at the same time simplifying DR operations and ensuring accessibility of data from any location. This outcome seemed impossible to achieve given the strict latency requirements that primary storage inherently carries, but ClearSky has found an elegant and effective solution to this conundrum. There is one caveat, however, and it will become evident in the following paragraph.


With the release of vSphere 6.5 there is really no reason anymore to stick with the old-school Windows vCenter; the vCenter Server Appliance (VCSA) has finally become a first-class citizen and the preferred deployment option for the most important vSphere infrastructure component. VMware already made this clear in 2016, when they released a tool to easily migrate a Windows vCenter 5.5 to VCSA 6.0 U2; with vSphere 6.5, migrating from an older Windows vCenter Server is one of the officially supported upgrade paths.

Although not new – it was inherited from v6.0 – one of the best features of VCSA 6.5 is its ease of upgrade. I tested it myself in the home lab, taking advantage of the first maintenance release (6.5a), which was released last week and brings support for NSX 6.3.


PernixData has announced today the General Availability of both their flagship products, FVP 3.5 and Architect 1.1.

For those not familiar with PernixData technology, FVP is the world’s first, and only, enterprise-class, server-side storage intelligence platform, embedded in the hypervisor to provide reliable I/O performance enhancements to virtual machines (VMs) on existing primary storage. In a nutshell, PernixData FVP virtualizes server-side flash and server RAM across all hosts, inserting these high-speed server-side resources into existing VM I/O paths to transparently reduce the IOPS burden on the storage system, de facto decoupling storage capacity from performance and accelerating any VMware-based application. Architect provides real-time analytics (descriptive, predictive, and prescriptive) for optimal storage and VM design, acting as a proactive, strategic data center management tool that continually generates new insights based on dynamic VM and infrastructure conditions.
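The "decoupling capacity from performance" idea can be sketched in a few lines. This is a hypothetical toy model, not FVP's actual design: reads are served from a local host-side cache (standing in for server flash/RAM) whenever possible, so only cold misses reach the backing array and its IOPS load drops for any repeated working set.

```python
# Toy model of server-side read caching in the I/O path (illustrative only).
class BackendArray:
    """Stand-in for the primary storage array; counts reads it must serve."""
    def __init__(self):
        self.reads = 0

    def read(self, lba):
        self.reads += 1
        return f"data@{lba}"

class HostCache:
    """Stand-in for host-local flash/RAM sitting in the VM I/O path."""
    def __init__(self, backend, capacity=1024):
        self.backend = backend
        self.capacity = capacity
        self.cache = {}                 # lba -> data

    def read(self, lba):
        if lba not in self.cache:       # miss: fetch from array, then cache
            if len(self.cache) >= self.capacity:
                self.cache.pop(next(iter(self.cache)))  # naive FIFO eviction
            self.cache[lba] = self.backend.read(lba)
        return self.cache[lba]

array = BackendArray()
host = HostCache(array)
for _ in range(100):                    # hot working set of 10 blocks,
    for lba in range(10):               # read 100 times each
        host.read(lba)
print(array.reads)                      # the array served only the 10 misses
```

Out of 1,000 VM reads, the array serves only 10; the other 990 never leave the host, which is the kind of IOPS offload the paragraph above describes.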


When it was launched, vRealize Operations Manager was immediately perceived by its user base as a complete rework of its predecessor, vCenter Operations Manager. Changes were introduced not only in features and capabilities, but also in the product’s architecture. Now at version 6.2, and incorporating some functionality inherited from Hyperic, vROps is definitely a mature product, which makes it an essential and indispensable component of any modern VMware virtualization infrastructure.

In this article I will try to cover most of the design considerations to be made when facing a vROps implementation scenario; I don’t mean to cover every facet of the “vROps Design Dilemma”, nor will I analyze all the possible design choices in depth. Nevertheless, I hope to give you enough food for thought to succeed with your vROps implementation.
