Information
This documentation supports the 20.08 version of Remedy Single Sign-On, which is available only to BMC Helix (SaaS) subscribers.

Remedy SSO as a service


Starting from the 1702 release, BMC supports Remedy Single Sign-On as a cloud native application. As a cloud native application, Remedy SSO provides the following features:

  • Multitenancy
  • Continuous delivery
  • Automated deployment
  • Built-in monitoring to keep a check on system health
  • Scale-in and scale-out capability depending on the system load

To install Kubernetes

The Remedy SSO cloud native application is based on Kubernetes (K8). K8 can manage clusters dynamically.

The first step is to create a K8 cluster in a set of machines.

The K8 infrastructure consists of one or more K8 master nodes and a set of slave nodes (1-n). Each node must have kubelet installed, which manages the node for K8, and Docker for running Docker instances.

The master contains:

  • Control plane, that is, the API server
  • etcd
  • kube-scheduler
  • Controller manager

The slave nodes run daemon processes such as kube-proxy.

There are many ways to create and instantiate a K8 cluster; this deployment uses kubeadm. The installation is automated by using the following steps.

  1. On the master node, install Git and Ansible, and clone the automation project.
# yum install git ansible
# git clone https://github.com/alvinhom/ansible-kubeadm-cluster.git

2. Edit the INVENTORY file with the master and slave host names of the cluster, and set the SSH/sudo credentials.

3. Run the Ansible playbook.

$ ansible-playbook cluster.yml -i INVENTORY

4. On CLM machines, run the following commands on the master node.

# yum install epel-release
# yum upgrade

5. On every node, run the following commands, and then restart the operating system, because the upgrade may update the kernel.

# yum install epel-release
# yum upgrade
# shutdown -r now

6. If the registry uses a self-signed certificate, import the Docker registry certificate on every node.

# example for access.bmc.com - internal docker registry
curl -k https://access.bmc.com/ca -o /etc/pki/ca-trust/source/anchors/access.bmc.com.crt
sudo update-ca-trust
sudo /bin/systemctl restart docker.service

7. If an SSH connection error appears, run the playbook with the following flag:

 --ssh-extra-args="-o StrictHostKeyChecking=no"

Important

Make sure that the system time on each node is synchronized.

Deploying the infrastructure

To operate services in a cloud native fashion, the Remedy SSO cloud native application leverages a number of common infrastructure services. Note that this infrastructure can be used by any service that is developed and deployed like the Remedy SSO cloud native application.

The following infrastructure components are installed:

  • Centralized Logging Infrastructure (ELK)
    • Elasticsearch
    • FluentD
    • Kibana
  • Metrics Collection and Dashboard
    • Heapster (K8 metrics)
    • InfluxDB
    • Grafana

To deploy the common infrastructure components, run the deploy_infrastructure.sh script that is part of the rsso-automation project. Several environment variables must be provided for items such as DB credentials and HTTPS certificates. Set the environment variables as needed before running the deployment script.

The environment variables are as follows:

  • DOCKER_REGISTRY_SERVER, DOCKER_USER, DOCKER_PASSWORD
    • URL of the private Docker registry installed in the RoD environment
    • Docker user name and password
  • RSSO_DB_USER, RSSO_DB_PASSWD
    • User name and password for the Remedy SSO Postgres DB
  • RSSO_HTTPS_KEY_FILE, RSSO_HTTPS_CERT_FILE
    • File locations of the HTTPS key file and certificate file.
    • If you use self-signed certificates, you can create them by using the openssl command:
      openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=*.onbmc.com"
    • Note that the common name should be * (or a specific host name) with the domain name. This value is used by the Ingress controller to match certificate names.
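Before running the deployment script, the variables above must be exported in the shell. The following sketch shows a typical invocation; every value here is a hypothetical placeholder, and the final line assumes you are in the rsso-automation project directory:

```shell
# Hypothetical placeholder values; substitute your real registry,
# credentials, and certificate paths before deploying.
export DOCKER_REGISTRY_SERVER="registry.example.com"
export DOCKER_USER="deployer"
export DOCKER_PASSWORD="changeme"
export RSSO_DB_USER="rsso"
export RSSO_DB_PASSWD="changeme"
export RSSO_HTTPS_KEY_FILE="/tmp/tls.key"
export RSSO_HTTPS_CERT_FILE="/tmp/tls.crt"

# Then, from the rsso-automation project directory, run:
# ./deploy_infrastructure.sh
```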

Deploying Remedy SSO

The following image shows the Remedy SSO service deployment architecture.

(Image: Remedy SSO deployment architecture)

The components are deployed through the K8 YAML descriptor files in the k8s directory. The deployment includes the following components:

  • Nginx ingress controller
    • Used to handle sticky session routing and ingress policies.
    • All external applications should refer to the Remedy SSO service through the URL of the deployed ingress controller.
    • Nginx controllers are deployed as a Kubernetes Daemonset, and only on nodes marked with the "ingress=true" label. To mark a node, use:
      kubectl label nodes <node-name> ingress=true
      The controller allocates a host port (80) for all incoming traffic. Routing is performed by the Ingress controller from the root URL.
  • Remedy SSO service
    • K8 service definition, which creates a service endpoint for Remedy SSO.
  • Remedy SSO pods (specified by rsso-deployment.yaml)
    • K8 pods containing the Remedy SSO Docker container.
    • Pods are created in a K8 deployment so that changes can be rolled back as needed.
    • A K8 Horizontal Pod Autoscaler (HPA) monitors and manages autoscaling in the cluster:
      • Default replication of 2.
      • Autoscales if overall CPU utilization is above 80% (total in the cluster).
    • Health check and liveness probes are defined on the Remedy SSO pods.
  • FluentD Daemonset to stream logs to ELK
    • A Daemonset runs a pod on every K8 host.
    • The FluentD agent monitors the logs generated by every K8 pod running on the host and streams them to the Elasticsearch cluster.
    • By default, only the stdout/stderr files are captured by the FluentD daemon.
    • Remedy SSO additionally deploys a FluentD pod alongside the Remedy SSO pods to send the rsso.log and Tomcat access log files to ELK.
  • Postgres DB
    • An HA Postgres DB is created by the deployment.
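The deployment and autoscaler described above can be sketched in K8 YAML roughly as follows. This is an illustrative fragment, not the shipped rsso-deployment.yaml; the image name, container port, probe path, and HPA upper bound are assumptions, while the replica count of 2 and the 80% CPU threshold come from the description above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rsso-deployment
spec:
  replicas: 2                        # default replication of 2
  selector:
    matchLabels:
      app: rsso
  template:
    metadata:
      labels:
        app: rsso
    spec:
      containers:
      - name: rsso
        image: registry.example.com/rsso:latest   # assumed image name
        ports:
        - containerPort: 8080                     # assumed port
        livenessProbe:
          httpGet:
            path: /rsso                           # assumed probe path
            port: 8080
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: rsso-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rsso-deployment
  minReplicas: 2
  maxReplicas: 10                    # assumed upper bound
  targetCPUUtilizationPercentage: 80 # autoscale above 80% CPU
```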

The CD deployment pipeline automatically deploys any changes to the Remedy SSO service to K8. For production, a script will be provided to deploy the Remedy SSO service.

Accessing Kubernetes and Infrastructure Services

You can view the status, metrics, and logs of services that are deployed on K8 through various dashboards. The following sections provide information about how to gain access to these dashboards.

Kubernetes dashboard

You can view the status of services that are deployed on K8 through the K8 dashboard. The URL is dependent on the cluster configuration. Perform the following steps to access the dashboard.

  1. Run the following shell command.

    kubectl get svc kubernetes-dashboard --namespace=kube-system -o jsonpath='{.spec.ports[0].nodePort}'
  2. Access the dashboard by entering http://hostname:port in the browser, where hostname is the master K8 node name and port is the value that you obtained through the kubectl command.

Kibana dashboard

You can view the log data stored in ELK through the Kibana dashboard. The URL depends on the cluster configuration. Run the following command to obtain the node port, and then access the dashboard in the same way as the K8 dashboard.

kubectl --namespace=kube-system get service kibana-logging  -o jsonpath='{.spec.ports[0].nodePort}'

Grafana dashboard

You can view the metric information collected from the K8 cluster and the Remedy SSO service through the Grafana dashboard. The URL depends on the cluster configuration. Run the following command to obtain the node port, and then access the dashboard in the same way as the K8 dashboard.

kubectl --namespace=kube-system get service monitoring-grafana  -o jsonpath='{.spec.ports[0].nodePort}'
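The three kubectl lookups above follow the same pattern, so a small helper can wrap them. This is a hypothetical sketch, not part of the product; it assumes kubectl is configured on the PATH and that MASTER_HOST is set to the K8 master node name:

```shell
# Hypothetical helper: prints the dashboard URL for a kube-system service.
# Assumes kubectl is configured and MASTER_HOST names the K8 master node.
dashboard_url() {
  local svc="$1"
  local port
  port=$(kubectl --namespace=kube-system get service "$svc" \
    -o jsonpath='{.spec.ports[0].nodePort}')
  echo "http://${MASTER_HOST}:${port}"
}

# Usage (uncomment on a configured cluster):
# dashboard_url kubernetes-dashboard
# dashboard_url kibana-logging
# dashboard_url monitoring-grafana
```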

To check Remedy SSO service

  • Use the K8 dashboard: select the "Default" namespace, and go to Services → rsso-service.
    • Check the number of pods running and view the logs.
  • Use the kubectl command line:
    • kubectl get svc rsso-service
    • kubectl get deployment rsso-deployment

 

 

 
