NetApp opened Tech Field Day 14 with a presentation by Andrew Sullivan and Kapil Arora, entirely focused on the company’s efforts in the Open Source field. One might imagine that a company like NetApp – or any other big IT vendor – would consider Open Source a menace to their business or, at best, a fad worth exploiting until the advent of the next hype. This may have made some sense until just a few years ago, but today we live in the GitHub age and it is evident that no company can afford not to share some of its code with the public.
NetApp is no different from any other company, and they are probably doing this for many of the same reasons as their competitors – what is worth investigating here is where their motivation comes from and what is driving their efforts.
NetApp’s involvement with Open Source began in 2011 with their support for OpenStack and their contributions to the Cinder and Manila projects; the team has evolved – mostly in the past 18 months – into something bigger called the “Open Ecosystem”, also affectionately referred to as “The Pub”. The focus has expanded well beyond OpenStack and now covers Containers, Automation and Orchestration, Configuration Management, CI/CD and Agile Development tools.
Should we call all this “DevOps”? If by DevOps we mean a practice that opens communication and collaboration channels between Application Developers and Operations Engineers to “quickly and safely deploy any new line of code”, then the answer is yes. DevOps allows for a different way of architecting, coding, implementing and supporting applications so that they can meet business demands at the speed dictated by the business itself, and do so reliably and safely. This is achieved by removing the gaps between developers and operations/infrastructure teams: developers cannot afford the luxury of waiting for Operations to deliver the infrastructure they need, while Operations cannot run the infrastructure disregarding the expectations of their internal customers, as if the Data Center were Ops’ private little Empire. It is not quite as simple as this, but you get the idea.
So one might ask, “NetApp is a storage vendor, what do they have to do with DevOps?”
I would answer that DevOps is a reality: it is here to stay and, like it or not, it has already changed the way applications are built and run. NetApp, just like any other storage vendor, has the choice to stick to their roots (and slowly die) or to drive the change, not just be carried along by it. At the end of the day, NetApp’s goal is to continue to sell storage; if this implies opening up to Open Source, then so be it. Driving the change means contributing code upstream to the myriad of Open Source projects that are defining the new landscape. In NetApp’s specific case, the goal is to enable developers to write and deploy applications that are not only capable of consuming (NetApp) storage but can also leverage storage features and capabilities without the intervention of the storage admins, who can then focus on more important tasks than slicing up and presenting LUNs.
Of course, NetApp will not be the only one to benefit from “driving up the stack”: a healthier and richer ecosystem is an opportunity for all the parties involved. Most of the code that NetApp contributes to Docker, Kubernetes, Chef, Puppet, Ansible, Jenkins or JFrog Artifactory is incorporated into these projects with the purpose of enabling developers to consume storage for their applications while depending less and less on the underlying infrastructure, allowing for application mobility, faster deployment times, performance assurance, replicability and so on. The beauty of the Open Source model is that anyone can contribute; there is no reason why Kubernetes code created by NetApp cannot be merged with improvements coming from, say, EMC or HPE if they see fit. Docker, Kubernetes and the rest could soon include new storage capabilities thanks to code contributions from competing vendors, and everyone would benefit from this community approach. It is the beauty of Open Source and, if it helps sell a few more arrays, why not?
The hub of NetApp’s (and SolidFire’s) Open Source efforts is “The Pub” (I love the brewing analogy): all the Open Source code and documentation is accessible from GitHub and available for tinkering, so if you have any interest in contributing, I would encourage you to start from The Pub. Some of the published projects worth mentioning are modules for Ansible and Puppet, Chef cookbooks, Jenkins plugins, a Docker Volume Plugin (nDVP) and the new Trident storage orchestration engine for Kubernetes.
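To give a taste of what lives in The Pub, nDVP is driven by a small JSON config file that tells the plugin which NetApp backend to talk to. The sketch below assumes the ONTAP NAS driver; the addresses, SVM name and credentials are of course placeholders:

```json
{
  "version": 1,
  "storageDriverName": "ontap-nas",
  "managementLIF": "10.0.0.1",
  "dataLIF": "10.0.0.2",
  "svm": "svm_docker",
  "username": "vsadmin",
  "password": "secret"
}
```

With the plugin configured and running, a developer can create and consume NetApp-backed volumes with plain `docker volume create -d netapp ...` commands, with no storage admin involved.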
Kapil gave a thorough introduction and demo of Trident at TFD, and I suggest you go watch the videos on TFD’s website for the details. In a few words, Trident allows Kubernetes to dynamically claim and present NetApp persistent storage volumes to applications, independently of the Pods they belong to.

In Kubernetes, persistent storage is provided through PVs (Persistent Volumes), which can be claimed by applications through a PVC (Persistent Volume Claim). With Static Provisioning, the storage admin prepares a set of PVs ready to be claimed through PVCs. This approach is not very efficient: volumes must be created beforehand by the storage admins, and there is room for storage waste (e.g. an 80 GB PVC bound to a 100 GB PV leaves 20 GB unused).

Trident, on the other hand, allows for Dynamic Provisioning, which does not require PVs to exist before a PVC is made: they are instantiated on demand, as long as the storage admins have previously defined Storage Classes. A Storage Class describes PV characteristics in terms of storage features (IOPS, QoS, etc.) or application tiers (Prod, QA, Dev, etc.), allowing a PVC to claim a PV according to the policies defined in the class; this clearly allows for true self-service and avoids wasting storage space. Kubernetes comes with out-of-the-box Dynamic Provisioning support for Cinder, AWS EBS, GCE PD, GlusterFS and Ceph; Trident enables Kubernetes to leverage the same persistent storage provisioning technique with NetApp storage. This is a clear example of how it is possible to (reliably) speed up application delivery by empowering developers to consume storage resources, while storage admins are no longer distracted from more important tasks by volume provisioning requests.
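To make the workflow a bit more concrete, here is a minimal sketch of what Dynamic Provisioning through Trident could look like; the class name and the `media` parameter are illustrative assumptions on my part, not taken from NetApp’s documentation:

```yaml
# Hypothetical StorageClass served by Trident; the "media" parameter
# is an illustrative backend attribute, not a verified Trident option.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
provisioner: netapp.io/trident
parameters:
  media: "ssd"
---
# A PVC referencing that class: Trident creates a matching PV on demand,
# so no pre-provisioned 100 GB volume sits around 20 GB underused.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: gold
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 80Gi
```

Once the claim is bound, a Pod simply mounts `app-data` as a volume; neither the developer nor the storage admin ever touches the array directly.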
Wrapping up, I think the effort NetApp is putting into its Open Ecosystem initiative is remarkable and well-motivated, and Andrew and Kapil did a great job of explaining The Pub’s role in NetApp’s strategy to the Tech Field Day delegates. I encourage you to visit The Pub and start playing with the code, and I hope to see more initiatives like this from other vendors too.
Disclaimer: I have been invited to Tech Field Day 14 by Gestalt IT, who paid for travel, hotel, meals and transportation. I did not receive any compensation to attend TFD and I am under no obligation whatsoever to write any content related to TFD. The contents of these blog posts represent my personal opinions about the products and solutions presented during TFD14.