Pre-deployment requirements


If you use an NFS server to meet the storage requirement, you must first create the nfs-client-provisioner pod.

Prepare for NFS sharing

The TSAC installer with Kubernetes 1.18.x – 1.23.x creates persistent volumes dynamically at the NFS server mount location. To do this, it creates PVCs and uses them when deploying pods.

Following are the steps to achieve this. Apply each of the following YAML files with: >> kubectl apply -f <xxx.yaml>

1. Create a service account and grant it the required roles

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

2. Create a storage class

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: bmc.com/nfs
parameters:
  archiveOnDelete: "false"
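Once this class exists, a PVC can request storage from it and the provisioner will create the backing volume on the NFS export. The following claim is an illustrative sketch for verifying dynamic provisioning; the name test-claim and the 1Mi size are placeholders, not part of the product configuration:

```yaml
# Illustrative PVC that requests storage from managed-nfs-storage;
# the claim name and size are placeholders for a quick smoke test.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

If provisioning works, the claim reaches the Bound state shortly after you apply it (check with kubectl get pvc), and a matching directory appears under the NFS export path.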

3. Create a deployment for NFS provisioning on the NFS server

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: bmc.com/nfs
            - name: NFS_SERVER
              value: vl-aus-domdv028.bmc.com
            - name: NFS_PATH
              value: /data/ade-stack/export
      volumes:
        - name: nfs-client-root
          nfs:
            server: vl-aus-domdv028.bmc.com
            path: /data/ade-stack/export

Make sure the following pod appears in the pod list:
>>kubectl get pods --all-namespaces | grep provision
default nfs-client-provisioner-5d8d8cbb49-6w44m 1/1 Running 0 5h57m

NGINX installation on a separate virtual machine to manage incoming traffic

yum install nginx

Make sure that the host is accessible on port 80.

https://www.cyberciti.biz/faq/how-to-install-and-use-nginx-on-centos-7-rhel-7/

Once NGINX is up and accessible, make the following changes:

Update nginx.conf under /etc/nginx/ so that it contains the following:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
#include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    include /etc/nginx/conf.d/stream/*.conf;
}

Create the following file if it does not exist: /etc/nginx/conf.d/stream/default.conf
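As an illustration, a stream configuration in this file can forward incoming HTTP and TLS traffic to the Kubernetes ingress controller's NodePorts. The upstream IP addresses (10.0.0.11, 10.0.0.12) and ports (30080, 30443) below are placeholders, not values from this deployment; replace them with your worker-node addresses and the actual ingress NodePorts:

```nginx
# Hypothetical /etc/nginx/conf.d/stream/default.conf;
# all addresses and ports are placeholders.
upstream ingress_http {
    server 10.0.0.11:30080;
    server 10.0.0.12:30080;
}
upstream ingress_https {
    server 10.0.0.11:30443;
    server 10.0.0.12:30443;
}
server {
    listen 80;
    proxy_pass ingress_http;
}
server {
    listen 443;
    proxy_pass ingress_https;
}
```

Using the stream module (rather than an http block) passes TLS through to the ingress controller unterminated, which is consistent with the ingress controller holding the certificate as described later in this section.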

Storage class creation

Create a storageclass.yaml file with the following details. It creates the storage class svv-nfs-storage:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: svv-nfs-storage
parameters:
  archiveOnDelete: "false"
provisioner: bmc.com/nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate

>> kubectl apply -f storageclass.yaml

Create the Kubernetes secret with a domain certificate (example: dsmlab)

kubectl -n challengers123 create secret generic pem --from-file=tls.crt="fullchain.pem" --from-file=tls.key="privkey.pem" --dry-run=client -o yaml | kubectl -n challengers123 apply -f -

kubectl create secret tls tls-dsmlab --key privkey.pem --cert fullchain.pem
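For reference, the kubectl create secret tls command above produces a secret of type kubernetes.io/tls shaped roughly as follows; the data values are the base64-encoded contents of the two PEM files and are elided here:

```yaml
# Approximate shape of the secret created above (data values elided).
apiVersion: v1
kind: Secret
metadata:
  name: tls-dsmlab
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded fullchain.pem>
  tls.key: <base64-encoded privkey.pem>
```

You can confirm the secret exists with kubectl get secret tls-dsmlab.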

Update the nginx daemon set

kubectl get ds -n ingress-nginx
- Lists the existing daemon sets.

kubectl edit ds nginx-ingress-controller -n ingress-nginx
- Edits the ingress-controller daemon set.
- Add the following line at the end of the args section:
- --default-ssl-certificate=default/tls-dsmlab
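After the edit, the container's args section in the daemon set should look roughly like the following sketch; the surrounding flags shown here are typical ingress-nginx defaults and may differ in your deployment:

```yaml
# Sketch of the container args after editing the daemon set;
# the flags other than --default-ssl-certificate are illustrative.
containers:
  - name: nginx-ingress-controller
    args:
      - /nginx-ingress-controller
      - --configmap=$(POD_NAMESPACE)/nginx-configuration
      - --default-ssl-certificate=default/tls-dsmlab
```

The value is in namespace/name form, so default/tls-dsmlab points at the tls-dsmlab secret created in the default namespace above.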

Manually creating persistent volumes

If you use Rancher Kubernetes with underlying Kubernetes 1.26.x (Rancher version v2.7.5), you must manually create persistent volumes in Rancher. Create three persistent volumes for the Redis server and one persistent volume for the connector service. When you create the persistent volumes, ensure that you provide the following details:

  • Volume plugin: Select NFS Share
  • Path: Specify the NFS storage location
  • Server: Specify the NFS server name
  • Read Only: No
  • Access Mode: Select Single Node Read-Write
  • Assign to Storage Class: Specify the storage class created using the YAML file
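The settings above correspond to a PersistentVolume manifest like the following sketch, if you prefer to create the volumes via kubectl instead of the Rancher UI. The name, capacity, server, and path are placeholders for your environment:

```yaml
# Illustrative PV matching the Rancher settings listed above;
# name, size, server, and path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv-0
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce          # Single Node Read-Write
  storageClassName: svv-nfs-storage
  nfs:                       # NFS Share volume plugin
    server: <nfs-server-name>
    path: <nfs-storage-location>
    readOnly: false
```

Repeat with distinct names and paths for each of the three Redis volumes and the connector-service volume.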

RSSO Certificate (Only For RSSO Integration)

If you plan to use RSSO-based login, make sure that the RSSO server certificate is copied to the following location:

<Install-Location>/onprem-deployment-manager/configs/external/certs

 
