In my previous post on Tanzu, I explained how easy it is to start consuming Kubernetes workloads from within vSphere 7, thanks to the newly introduced “vSphere with Tanzu”.
As discussed, this comes with some limitations, but at the same time it enables customers to deploy and consume modern apps on a tried and tested platform without the need to invest in more advanced technologies like vSAN or NSX-T.
For those who are ready to take a significant leap and want a richer, more complete experience, the way to go is “vSphere with Kubernetes”.
While this branding can be quite confusing (and indeed it is), the difference between the two products is quite simple in terms of prerequisites:
- vSphere with Tanzu only requires vSphere 7
- vSphere with Kubernetes requires VCF, and therefore NSX-T and vSAN
By leveraging Software Defined Networking and Storage, besides the operational advantages brought by SDDC Manager and vLCM, vSphere with Kubernetes makes it possible to deploy and support Modern Applications architected by mixing together different building blocks: VMs, containers and functions. vSphere with Kubernetes is designed from the ground up to serve the needs of both Developers and VI/Cloud Admins by empowering them to deploy Kubernetes clusters, Pods and VMs using their preferred tool (either kubectl or the vCenter UI/APIs).
This goal is achieved by enabling VMware Cloud Foundation Services on top of VCF: these services, provided by a Supervisor Cluster, make it possible to programmatically or interactively consume objects like VMs, Kubernetes clusters and (vSphere) Pods, as well as storage, networking and container images via Harbor registries.
So, what is a Supervisor Cluster then? Well, a Supervisor Cluster is simply a vSphere Cluster in a VCF Workload Domain that has been “Tanzu enabled” by deploying Kubernetes Control Plane VMs (which hold the full set of K8s master node components) and “Spherelets” (VMware’s own implementation of the kubelet, running alongside hostd).
A vSphere Supervisor Cluster node is at the same time a traditional ESXi host, a K8s master node and a worker node, able to run traditional VMs, TKG clusters and vSphere Pods. The VI/Cloud Admin would then create namespaces (not in the sanctioned K8s way) to control RBAC and the allocation of resources to the different teams that want to consume the above workloads (either via vCenter or kubectl).
A couple of words on vSphere Pods and TKG Clusters.
vSphere Pods created within vSphere with Kubernetes are for all intents and purposes VMs, but they differ from the usual ones because they run on a runtime called CRX: a special-purpose VM optimized to run a fine-tuned Linux kernel and container engine, on top of which the containers live. The benefit of this approach is that, while these objects can be instantiated by developers through kubectl using a YAML file, they appear as VMs from within vCenter; they therefore provide a way of quickly deploying K8s-like workloads without having to manage a fully formed K8s cluster. It is important to highlight that vSphere Pods explicitly require NSX-T to provide networking, so they can only be used within vSphere with Kubernetes on VCF, not with vSphere with Tanzu.
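As a minimal sketch, requesting a vSphere Pod is just a matter of applying a standard Kubernetes Pod manifest against a Supervisor Cluster namespace (the namespace and names below are hypothetical placeholders):

```yaml
# Hypothetical example: a plain Pod manifest submitted to a Supervisor
# Cluster namespace becomes a vSphere Pod (a CRX-backed VM in vCenter).
apiVersion: v1
kind: Pod
metadata:
  name: demo-web            # hypothetical Pod name
  namespace: team-a-ns      # a namespace created by the VI/Cloud Admin
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
```

Applied with `kubectl apply -f pod.yaml` after logging in to the Supervisor Cluster, this object shows up in vCenter as a VM rather than as a container inside a guest cluster.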
TKG clusters are fully compliant K8s clusters made of VMs running VMware’s own certified K8s distribution, Tanzu Kubernetes Grid. They are instantiated via Cluster API by developers and appear as VMs to VI/Cloud Admins. Of course, these K8s clusters are also fully manageable via Tanzu Mission Control if desired.
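As an illustration of the declarative workflow (the names, VM classes and version below are placeholders, and the v1alpha1 API shown may differ across releases), a developer can request a TKG cluster from a Supervisor Cluster namespace with a manifest along these lines:

```yaml
# Hypothetical TanzuKubernetesCluster request; the class and storageClass
# values depend on the VM classes and storage policies the VI/Cloud Admin
# has assigned to the namespace.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01      # hypothetical cluster name
  namespace: team-a-ns      # Supervisor Cluster namespace
spec:
  distribution:
    version: v1.18          # desired Kubernetes release
  topology:
    controlPlane:
      count: 3              # HA control plane
      class: best-effort-small
      storageClass: vsan-default-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```

Behind the scenes, Cluster API reconciles this specification into control plane and worker VMs, which the VI/Cloud Admin sees and manages like any other VMs in the namespace.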
To conclude, VCF is already a powerful platform, and enabling the Tanzu capabilities enriches it further, making it the only SDDC solution that can run any type of Modern Application out of the box. vSphere with Kubernetes on VCF makes this possible without having to choose and integrate different solutions or build multiple competing skillsets, helping to bridge the gap between Devs and Operators.
For additional details, please check out the VMware presentations on vSphere with Kubernetes at Tech Field Day 21, where, among other things, you can also find additional information on how vSphere with Kubernetes consumes storage and networking.