## Kubectl-vsphere Commands
```bash
# Log in to the server
kubectl-vsphere login --insecure-skip-tls-verify --server 10.10.0.6
```
```bash
# Get current versions of Kubernetes available in the namespace
kubectl get virtualmachineimages
```
```bash
# Get current versions of Kubernetes available in the namespace, with better formatting
kubectl get tanzukubernetesreleases \
-o custom-columns="NAME:.metadata.name,VERSION:.spec.version,READY:.status.phase"
```
```bash
# Get current context
kubectl config current-context
```
```bash
# Get versioned ClusterClasses
kubectl get clusterclass -n osn-namespace
```
Available ClusterClasses:
```
NAME                     AGE
builtin-generic-v3.1.0   12h
builtin-generic-v3.2.0   12h
builtin-generic-v3.3.0   12h
tanzukubernetescluster   12h
```
```bash
# Get Clusters
kubectl get clusters -n osn-namespace
```
```bash
# Get Machines
kubectl get machines -n osn-namespace
```
## Creating CAPI-styled TKG Cluster
### Cluster Classes
With TKG 3.2.0 and later, custom ClusterClasses were introduced.
To use those ClusterClasses we need to translate some variables and add them to our Tanzu cluster deployment YAML file.
```bash
# Get currently available ClusterClasses
kubectl -n osn-namespace get clusterclasses
```
```bash
# Example Output
NAME                     AGE
builtin-generic-v3.1.0   2d18h
builtin-generic-v3.2.0   2d18h
builtin-generic-v3.3.0   2d18h
tanzukubernetescluster   2d18h
```
As you can see in the example output, the old ClusterClass `tanzukubernetescluster` is still available to us.
It is generally advised to use the new versioned ClusterClasses (`builtin-generic-v*`), either by creating a ClusterClass yourself or by using a generic one.
Example using the `builtin-generic-v3.3.0` ClusterClass:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: tanzu-cluster-1
  namespace: osn-namespace
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["198.53.100.0/16"]
    pods:
      cidrBlocks: ["192.0.5.0/16"]
    serviceDomain: "lab.sponar.de"
  topology:
    class: builtin-generic-v3.3.0
    version: v1.32.0---vmware.6-fips-vkr.2
    controlPlane:
      replicas: 3
    workers:
      machineDeployments:
        - class: node-pool
          name: node-pool-1
          replicas: 3
    variables:
      - name: vmClass
        value: best-effort-medium
      - name: storageClass
        value: k8s
      - name: vsphereOptions
        value:
          persistentVolumes:
            defaultStorageClass: k8s
```
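Before applying, the manifest can be checked against the Supervisor's admission webhooks with a server-side dry run. This assumes you are logged in to the Supervisor context and uses the file path from the Setup section:

```shell
# Server-side dry run: the Supervisor validates the Cluster object without creating it
kubectl apply --dry-run=server -f Tanzu-Cluster-1/tanzu-k8s-cluster.yaml
```

This catches schema and webhook validation errors early, before anything is provisioned.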
### Setup
Log in to the Supervisor:
```bash
# Log in to the server
kubectl-vsphere login --insecure-skip-tls-verify --server 192.168.40.194
```
Apply the YAML:
```bash
kubectl apply -f Tanzu-Cluster-1/tanzu-k8s-cluster.yaml
```
Wait for the deployment to finish, check the Status:
```bash
kubectl get clusters -n osn-namespace
```
```bash
# Example output, Cluster is deployed, see Phase = Provisioned:
NAME              CLUSTERCLASS             PHASE         AGE   VERSION
tanzu-cluster-1   builtin-generic-v3.3.0   Provisioned   18m   v1.32.0+vmware.6-fips
```
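Instead of polling manually, `kubectl wait` can block until the Cluster API object reports ready (the 45m timeout is an arbitrary choice; adjust as needed):

```shell
# Block until the CAPI Cluster object reports the Ready condition
kubectl wait --for=condition=Ready cluster/tanzu-cluster-1 \
  -n osn-namespace --timeout=45m
```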
### Login to Cluster with kubectl-vsphere
```bash
# login with the cluster as argument
kubectl-vsphere login --insecure-skip-tls-verify --server 192.168.54.101 --tanzu-kubernetes-cluster-name tanzu-cluster-1 --tanzu-kubernetes-cluster-namespace osn-namespace
```
### Login to TKG Cluster with kubeconfig
Get kubeconfig secret:
```bash
kubectl get secrets -n osn-namespace
```
```bash
# Look for the following line:
tanzu-cluster-1-kubeconfig Opaque 1 5m
```
Extract the kubeconfig secret:
```bash
kubectl get secret tanzu-cluster-1-kubeconfig \
-n osn-namespace \
-o jsonpath='{.data.value}' | base64 -d > tanzu-cluster-1.kubeconfig
```
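The `base64 -d` step simply reverses the encoding Kubernetes applies to Secret data; a quick local round-trip (no cluster required) shows the mechanics:

```shell
# Encode a sample snippet the way Secret data is stored, then decode it back
sample='apiVersion: v1'
encoded=$(printf '%s' "$sample" | base64)
printf '%s\n' "$encoded"             # the stored (encoded) form
printf '%s' "$encoded" | base64 -d   # recovers the original text
```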
Export the kubeconfig path:
```bash
export KUBECONFIG=~/Documents/Git/homelab/Tanzu/Tanzu-Cluster-1/tanzu-cluster-1.kubeconfig
```
Check that the kubeconfig works by listing the available contexts:
```bash
kubectl config get-contexts
```
```bash
# Example Output:
CURRENT   NAME                                    CLUSTER           AUTHINFO                NAMESPACE
*         tanzu-cluster-1-admin@tanzu-cluster-1   tanzu-cluster-1   tanzu-cluster-1-admin
```
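With the kubeconfig active, switch to the context (name taken from the `get-contexts` output) and confirm API access by listing namespaces:

```shell
# Select the admin context from the extracted kubeconfig
kubectl config use-context tanzu-cluster-1-admin@tanzu-cluster-1
# A successful namespace listing confirms the credentials work
kubectl get namespaces
```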
## Create Storageclass with Retain
By default, the StorageClass created when the cluster is deployed has its `reclaimPolicy` set to `Delete`.
This can cause data loss when deploying services and accidentally deleting a PVC.
To retain the underlying volume even when its PVC is deleted, we copy the existing StorageClass and create a new one with `reclaimPolicy` set to `Retain`.
```bash
# List the currently available storage classes
kubectl get sc
# Output (taken after the k8s-retain class created below was applied):
NAME                   PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
k8s (default)          csi.vsphere.vmware.com   Delete          Immediate              true                   13h
k8s-latebinding        csi.vsphere.vmware.com   Delete          WaitForFirstConsumer   true                   13h
k8s-retain (default)   csi.vsphere.vmware.com   Retain          Immediate              true                   5s
```
```bash
# Copy the existing "k8s" storage class into a new yaml file
kubectl get storageclass k8s -o yaml > sc-retain.yaml
```
```bash
# Edit the exported yaml
vim sc-retain.yaml
```
```yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # Set to "false" unless this class should become the default
  creationTimestamp: "2025-06-08T19:45:50Z" # Server-generated; can be removed before applying
  labels:
    isSyncedFromSupervisor: "yes"
  name: k8s-retain # Change the name of the SC to prevent duplicate naming
  resourceVersion: "309" # Server-generated; can be removed before applying
  uid: 6095be7a-aad7-4909-b1a8-8382e1edcbec # Server-generated; can be removed before applying
parameters:
  svStorageClass: k8s
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Retain # Change from Delete to Retain
volumeBindingMode: Immediate
```
```bash
# Apply the new sc yaml
kubectl apply -f sc-retain.yaml
```
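After applying, a quick jsonpath query confirms the new class reports the expected policy:

```shell
# Should print "Retain"
kubectl get sc k8s-retain -o jsonpath='{.reclaimPolicy}'
```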
## Example Deployment of nginx
Create a new deployment file for nginx, for example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: bitnami/nginx:latest
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8080 # Must match NGINX_HTTP_PORT_NUMBER below
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          env:
            - name: NGINX_HTTP_PORT_NUMBER
              value: "8080"
            - name: NGINX_LISTEN_ADDRESS
              value: "0.0.0.0"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.54.200
  selector:
    app: nginx
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
```
Apply the yaml file:
```bash
kubectl apply -f nginx-test.yaml
```
Check the deployment status:
```bash
kubectl get deployments
```
Troubleshooting:
```bash
# Describe the deployment (status, conditions, replica counts)
kubectl describe deployment nginx -n default
# Get events in the default namespace
kubectl get events -n default
```
Get Pods:
```bash
kubectl get pods -n default
```
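Once the pods are running, verify that the LoadBalancer Service received the requested IP and answers on port 8080. The IP comes from the Service manifest above; run the curl from a machine that can reach the load-balancer network:

```shell
# EXTERNAL-IP should show 192.168.54.200
kubectl get svc nginx-service -n default
# A 200 status code confirms nginx is serving
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.54.200:8080/
```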
## Example Helm Deployment of Netbox
Before starting, we will create a new namespace for Netbox:
```bash
# create Namespace for netbox
kubectl create namespace netbox
```
Then we will install Helm and add the Netbox Helm repository:
```bash
# Install Helm, for example using brew
brew install helm
```
```bash
# Add netbox repository to helm
helm repo add netbox https://charts.netbox.oss.netboxlabs.com/
helm repo update
```
## Concepts
| Concept | Role |
| ------------- | ---------------------------------------------------------------- |
| Namespace | Resource boundary + access control in Supervisor (or K8s itself) |
| K8s Cluster | Made of control plane + worker nodes; runs your apps |
| Control Plane | Brain of the cluster; manages everything |
| Worker Node | Muscles of the cluster; actually runs the apps (pods) |
| TKG Cluster | A guest Kubernetes cluster deployed within a vSphere Namespace |
## 🔗Resources
### Broadcom vSphere Supervisor Resources
- [Supervisor deployment guide (Broadcom)](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere-supervisor/8-0/installing-and-configuring-vsphere-supervisor/deploy-a-one-zone-supervisor/deploy-a-supervisor-with-vds-networking.html)
- https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere-supervisor/8-0/using-tkg-service-with-vsphere-supervisor/provisioning-tkg-service-clusters/using-the-cluster-v1beta1-api/v1beta1-example-default-cluster.html
### HAProxy
- [HAProxy Installation Guide](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere-supervisor/8-0/installing-and-configuring-vsphere-supervisor/networking-for-vsphere-with-tanzu/install-and-configure-the-haproxy-load-balancer.html)
- [Broadcom KB IP boot fix](https://knowledge.broadcom.com/external/article?articleId=377393)
### Installation Guides
- https://vtam.nl/2022/10/23/vsphere-with-tanzu-on-vds-with-haproxy/
- https://little-stuff.com/2023/05/06/creating-a-tanzu-kubernetes-cluster-in-vsphere-8-with-tanzu/