Deploying Black Duck in Azure Kubernetes Service (AKS) (archived)
Introduction
This page describes how to deploy Black Duck in Azure Kubernetes Service (AKS). Install Black Duck in AKS by using Synopsys Operator (/wiki/spaces/BDLM/pages/34373652), a tool that deploys and manages Synopsys software in Kubernetes and Red Hat OpenShift environments.
Prerequisites
Ensure that your environment meets the following requirements:
- Cluster conformance, permissions, and sizing:
- Black Duck is supported only on specific versions of Kubernetes and on Kubernetes clusters that are described as conformant.
- Your cluster must also grant the necessary permissions and have sufficient computational resources.
- Before you deploy Black Duck in AKS, ensure that you can provision a cluster that meets all the prerequisites specified in /wiki/spaces/BDLM/pages/65765594.
Known Compatibility Issues
Before you proceed with a Black Duck deployment in AKS that uses either Azure Files or Azure Database for PostgreSQL, contact your authorized Synopsys support representative for caveats and recommendations. Black Duck does not support all file systems and PostgreSQL-compatible external databases.
For example, there are known inherent incompatibilities between Azure Files and the internal containerized Postgres database shipped with Black Duck.
Also, in some versions of Black Duck, there are known compatibility issues with Azure Database for PostgreSQL.
- Azure environment
Your Azure environment must provide the following elements:
- Access to Azure to create and manage the following items (high-level):
- Resource groups
- Kubernetes service
- Service principal
- Storage
- You must install the Azure CLI (see the Azure CLI installation documentation for instructions)
Advanced topics
Please note that Kubernetes is a complex ecosystem and many factors must be considered when creating a cluster (e.g., storage types (IOPS), future scaling, high availability, redundancy, backup, disaster recovery).
This document does not cover these advanced topics; the intent is to assist in provisioning a running cluster with persistent storage (default or managed-premium) so that Black Duck can be deployed.
To fulfill more complex requirements, please consult your organization's cloud administrator.
If you have questions about whether any aspect of your underlying infrastructure is supported, please contact your authorized Synopsys support representative.
Considerations for creating a Kubernetes cluster
The commands used in this document create the Azure Kubernetes cluster with recommended default configuration values.
You might need to modify some of these example values to accommodate your organization's needs.
Consider the following options when you set up your Kubernetes cluster, and choose the best fit while observing your company's policies:
- Resource group - Use a separate resource group for the cluster, or use an existing one
- Service principal - You can choose to specify your own service principal.
- Role-based access - Do you require role-based access to the cluster?
- Persistent storage - For production, use persistent volumes and consider the different levels of service Azure offers. The two built-in options are 'default' and 'managed-premium', with different IOPS and costs, but you can configure other options. Note that Azure File storage is not supported by the Black Duck internal PostgreSQL database container.
- Namespaces - You can specify your own K8S namespaces for Synopsys Operator and Black Duck.
- Database - Please note that using Azure Database for PostgreSQL as an external database is not supported.
The following example configures a PostgreSQL container within the cluster to store the Black Duck data.
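The decision points above can be captured up front as shell variables and reused in place of the `<...>` placeholders in the commands later in this document. All names below are illustrative, not required values:

```shell
# Hypothetical names for the choices discussed above; adjust to your environment.
RESOURCE_GROUP="blackduck-rg"           # new or existing resource group
CLUSTER_NAME="blackduck-aks"            # also used when naming the service principal
STORAGE_CLASS="default"                 # or "managed-premium" for higher IOPS at higher cost
OPERATOR_NAMESPACE="synopsys-operator"  # namespace for Synopsys Operator
BLACKDUCK_NAMESPACE="blackduck-pvc"     # namespace for the Black Duck instance
echo "cluster=$CLUSTER_NAME storage=$STORAGE_CLASS"
```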
Creating a Kubernetes cluster
To create a Kubernetes cluster, use the following steps:
- Log in and verify CLI access
Use the following command to check that you have access to the CLI:
az --version
Log in to Azure (this opens a browser to authenticate):
az login
Creating the resource group
This step is optional because you might want to place your cluster in an existing resource group, depending on how you organize your Azure infrastructure.
Using a separate resource group for test environments has the advantage that when testing is complete, deleting the resource group deletes everything related to it.
Create the resource group (change the name and location):
az group create \
  --name <your resource group name> \
  --location <your location>
Refer to the Azure Region documentation for more information about providing a location. For services and regions, refer to available services by region. At a minimum, you need Azure Kubernetes Services.
Creating the service principal
The service principal is required by the Kubernetes cluster. You can generate one automatically during cluster creation or specify one.
The following command gives you an appId and password in the JSON result:
az ad sp create-for-rbac \
  --name <your cluster name> \
  --output json \
  --skip-assignment
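The appId and password can be pulled out of that JSON result for use in the cluster-creation step. The snippet below is a sketch that parses a sample response with sed (the values shown are placeholders, not real credentials); if jq is installed, `jq -r .appId` is a more robust alternative:

```shell
# Sample of the JSON shape returned by `az ad sp create-for-rbac --output json`.
# The IDs and secret below are placeholders.
SP_JSON='{"appId": "11111111-2222-3333-4444-555555555555", "password": "example-secret", "tenant": "example-tenant"}'

# Extract the fields with sed; with jq you could use `jq -r .appId` instead.
APP_ID=$(printf '%s' "$SP_JSON" | sed -n 's/.*"appId": "\([^"]*\)".*/\1/p')
SP_PASSWORD=$(printf '%s' "$SP_JSON" | sed -n 's/.*"password": "\([^"]*\)".*/\1/p')

echo "$APP_ID"   # → 11111111-2222-3333-4444-555555555555
```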
Creating the cluster
You might need to change the number of workers and the size of machines to match your requirements.
The following cluster-creation command works for a basic small Black Duck installation.
Replace <appId> and <password> with the values output when you generated the service principal.
The following command might take some time to run:
az aks create \
  --name <your cluster name> \
  --resource-group <your resource group name> \
  --node-count 3 \
  --node-vm-size Standard_D4s_v3 \
  --service-principal <appId> \
  --client-secret <password> \
  --generate-ssh-keys
This example creates three Standard_D4s_v3 worker servers (4 vCPUs, 16 GB RAM each). Refer to the Black Duck sizing documentation for sizing information.
When you choose a Kubernetes version, make sure to select one that is supported by Black Duck.
One benefit of using Azure Kubernetes Service (AKS) is that Azure manages the master nodes for you, so you do not need to provision servers to take on this role.
Therefore, all machines you provide to the cluster are used for running the pods (Black Duck services in this case) added to the cluster.
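Note also that Kubernetes reserves a share of each node's memory for system components (this document cites roughly 20%, capped at 4 GB), so the memory available to Black Duck pods is lower than the VM size suggests. A quick back-of-envelope check, assuming that simplified rule rather than the exact AKS reservation formula:

```shell
# Estimate allocatable memory per node: total minus min(20% of total, 4 GB).
# This mirrors the simplified rule cited in this document, not the exact AKS formula.
allocatable_gb() {
  awk -v t="$1" 'BEGIN { r = t * 0.20; if (r > 4) r = 4; printf "%.1f\n", t - r }'
}

allocatable_gb 16   # Standard_D4s_v3 (16 GB total) → 12.8
allocatable_gb 32   # a 32 GB node → 28.0 (reservation capped at 4 GB)
```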
Kubernetes reserves twenty percent of the memory on each server (up to a 4 GB maximum); refer to the AKS documentation for more details.
Install kubectl
To manage your Kubernetes cluster, you need the kubectl tool, which you can install by running the following command in the Azure CLI:
sudo az aks install-cli
Skip this step if you have already installed kubectl.
Log in to your cluster.
You must connect your kubectl tool to your cluster. The Azure CLI configures kubectl with the az aks get-credentials command:
az aks get-credentials \
  --name <your cluster name> \
  --resource-group <your resource group name>
Verify your cluster is working by using kubectl.
Verify the correct number of nodes:
$ kubectl get nodes
NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-23971845-0   Ready    agent   4m    v1.11.9
aks-nodepool1-23971845-1   Ready    agent   3m    v1.11.9
aks-nodepool1-23971845-2   Ready    agent   3m    v1.11.9
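If you script this verification, the Ready count can be checked mechanically. The sketch below parses output captured from the example above; on a live cluster you would pipe `kubectl get nodes --no-headers` into the same filter:

```shell
# Sample `kubectl get nodes` output (from the example cluster above).
NODES='aks-nodepool1-23971845-0   Ready   agent   4m   v1.11.9
aks-nodepool1-23971845-1   Ready   agent   3m   v1.11.9
aks-nodepool1-23971845-2   Ready   agent   3m   v1.11.9'

# Count rows whose STATUS column reads Ready.
READY_COUNT=$(printf '%s\n' "$NODES" | awk '$2 == "Ready" { n++ } END { print n + 0 }')
echo "$READY_COUNT"   # → 3
```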
Verify the Kubernetes pods are running:
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   heapster-5d6f9b846c-c5p6g              2/2     Running   0          5m
kube-system   kube-dns-autoscaler-746998ccf6-kgk2z   1/1     Running   0          5m
kube-system   kube-dns-v20-7c7d7d4c66-qdn7s          4/4     Running   0          2m
kube-system   kube-dns-v20-7c7d7d4c66-xmt4h          4/4     Running   0          5m
kube-system   kube-proxy-kwbbj                       1/1     Running   0          2m
kube-system   kube-proxy-s7tk2                       1/1     Running   0          2m
kube-system   kube-proxy-vrml9                       1/1     Running   0          2m
kube-system   kube-svc-redirect-9pqxh                2/2     Running   0          2m
kube-system   kube-svc-redirect-svbgt                2/2     Running   0          2m
kube-system   kube-svc-redirect-tbddv                2/2     Running   0          2m
kube-system   kubernetes-dashboard-67bdc65878-tc6zq  1/1     Running   0          5m
kube-system   metrics-server-5cbc77f79f-jndx6        1/1     Running   0          5m
kube-system   tunnelfront-d96d8db4d-2x6px            1/1     Running   0          5m
Verify the Azure default storage classes are available:
$ kubectl get sc
NAME                PROVISIONER                AGE
default (default)   kubernetes.io/azure-disk   7m
managed-premium     kubernetes.io/azure-disk   7m
At this stage, the Azure Kubernetes Service (AKS) cluster is up and running, and you can connect to it and manage it by using kubectl from your machine.
Configuring persistent storage
When you create a production Black Duck instance, Synopsys recommends that you use persistent volumes for Black Duck data storage.
To read more about Black Duck's requirements for persistent volumes, see the /wiki/spaces/BDLM/pages/65831270 page.
When you deploy Black Duck using the steps below, reference your desired persistent-volume configuration by verifying that pvcStorageClass is set to 'default'.
You might want to change this to 'managed-premium' for faster disk access, which comes at a higher cost.
Configuring an external database (not recommended by Synopsys)
Although you can configure Black Duck to function with an external PostgreSQL database, Synopsys does not recommend this for Black Duck installations in Azure Kubernetes Service because of incompatibilities between some versions of Black Duck and Azure Database for PostgreSQL.
Synopsys recommends that you use the containerized PostgreSQL database that is shipped by default with Black Duck on Azure Kubernetes Service.
If, for non-production uses, you want to experiment with Azure DB for PostgreSQL, refer to Configuring Azure database for PostgreSQL.
Deploying Black Duck
Now that your cluster is ready for Black Duck, you can proceed with the Black Duck deployment. The installation procedure for Black Duck on AKS is the same as the Black Duck installation process for Kubernetes (see /wiki/spaces/BDLM/pages/65765594 for information).
To deploy Black Duck with Azure Database for PostgreSQL (not recommended by Synopsys), you must use Black Duck version 2019.4.2 or later, and you must set the following values during the Black Duck deployment:
- --external-postgres-host <<DB FULLY-QUALIFIED HOSTNAME>>
- --external-postgres-port 5432
- --external-postgres-admin blackduck
- --external-postgres-user blackduck_user
- --external-postgres-admin-password <<ADMIN-PASSWORD>>
- --external-postgres-user-password <<USER-PASSWORD>>
- --external-postgres-ssl=true
- --environs HUB_POSTGRES_CONNECTION_USER:blackduck_user@<<DB HOSTNAME>>,HUB_POSTGRES_CONNECTION_ADMIN:blackduck@<<DB HOSTNAME>>
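The flags above can be collected in one place so the hostname only has to be edited once. In the sketch below, the hostname and passwords are placeholders, and the deployment command that ultimately consumes these flags depends on your Synopsys Operator version, so only the flag list is shown:

```shell
# Placeholders only - substitute your real database hostname and passwords.
DB_HOST="example.postgres.database.azure.com"   # hypothetical fully-qualified hostname
ADMIN_PASSWORD="example-admin-password"         # placeholder
USER_PASSWORD="example-user-password"           # placeholder

# Assemble the external-database flags listed above into a single variable.
PG_FLAGS="--external-postgres-host $DB_HOST \
--external-postgres-port 5432 \
--external-postgres-admin blackduck \
--external-postgres-user blackduck_user \
--external-postgres-admin-password $ADMIN_PASSWORD \
--external-postgres-user-password $USER_PASSWORD \
--external-postgres-ssl=true \
--environs HUB_POSTGRES_CONNECTION_USER:blackduck_user@$DB_HOST,HUB_POSTGRES_CONNECTION_ADMIN:blackduck@$DB_HOST"

echo "$PG_FLAGS"
```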
Post-installation steps
Verifying the Installation
Synopsys Operator handles the deployment of Black Duck, which includes initializing the internal database and supplying your license.
When you deploy the Black Duck instance, use the kubectl get svc command to view the EXTERNAL-IP.
In the following examples, the namespace is called blackduck-pvc, and the Black Duck instance is available at 'https://40.117.146.187'.
$ kubectl get pods -n blackduck-pvc
NAME                    READY   STATUS    RESTARTS   AGE
authentication-q6cqh    1/1     Running   0          25m
cfssl-tgxht             1/1     Running   0          25m
documentation-rvpv5     1/1     Running   0          25m
jobrunner-2b4tx         1/1     Running   0          25m
postgres-5sf7c          1/1     Running   0          27m
registration-7hrxv      1/1     Running   0          25m
scan-zqmkj              1/1     Running   0          25m
solr-kh52t              1/1     Running   0          25m
webapp-logstash-l8gj5   2/2     Running   0          25m
webserver-kpb6d         1/1     Running   0          25m
zookeeper-rjxmr         1/1     Running   0          25m

$ kubectl get svc -n blackduck-pvc
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)         AGE
authentication   ClusterIP      10.0.254.197   <none>           8443/TCP        26m
cfssl            ClusterIP      10.0.46.84     <none>           8888/TCP        26m
documentation    ClusterIP      10.0.220.36    <none>           8443/TCP        26m
logstash         ClusterIP      10.0.199.5     <none>           5044/TCP        26m
postgres         ClusterIP      10.0.37.158    <none>           5432/TCP        27m
registration     ClusterIP      10.0.223.28    <none>           8443/TCP        26m
scan             ClusterIP      10.0.84.64     <none>           8443/TCP        26m
solr             ClusterIP      10.0.137.160   <none>           8983/TCP        26m
webapp           ClusterIP      10.0.45.187    <none>           8443/TCP        26m
webserver        ClusterIP      10.0.117.32    <none>           443/TCP         26m
webserver-lb     LoadBalancer   10.0.116.42    40.117.146.187   443:31820/TCP   26m
webserver-np     NodePort       10.0.46.94     <none>           443:30183/TCP   26m
zookeeper        ClusterIP      10.0.103.253   <none>           2181/TCP        26m
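If you need the external IP in a script, it can be extracted from the service listing. The sketch below parses the webserver-lb line captured above; on a live cluster, a jsonpath query is more robust:

```shell
# On a live cluster you could instead run:
#   kubectl get svc webserver-lb -n blackduck-pvc \
#     -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# Here we parse the sample line captured above.
SVC_LINE='webserver-lb   LoadBalancer   10.0.116.42   40.117.146.187   443:31820/TCP   26m'

# The EXTERNAL-IP is the fourth whitespace-separated column.
EXTERNAL_IP=$(printf '%s\n' "$SVC_LINE" | awk '{ print $4 }')
echo "https://$EXTERNAL_IP"   # → https://40.117.146.187
```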
Viewing the Black Duck user interface
To obtain command-line access to your cluster, use the following command:
az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name ${CLUSTER_NAME}
See /wiki/spaces/BDLM/pages/65765904 for instructions about the resulting externally accessible IP address.
Deleting the resources
If you wish to delete the resources you created, use the following steps:
To delete the Black Duck instance (assuming it is named blackduck-pvc), use the following command:
kubectl delete blackduck blackduck-pvc
To delete Synopsys Operator, use the following command:
cd synopsys-operator-2019.1.0/install/kube/
./cleanup.sh synopsys-operator
To delete the cluster, use the following command:
az aks delete --resource-group synopsys-blackduck-test --name synopsys-blackduck-test-cluster --yes --no-wait
To delete the resource group (optional):
Important: Delete the resource group only if you created it for this cluster.
az group delete --yes --resource-group synopsys-blackduck-test --no-wait
Remove the service principal by running the following shell script:
# Remove the Active Directory application(s) created.
APPS=$(az ad app list --display-name $CLUSTER_NAME --query [].appId --output tsv)
for app in $APPS; do
  az ad app delete --id $app
done
# The script above should remove the service principal itself.
©2020 Synopsys, Inc. All Rights Reserved